2024: AI's Quiet Revolution
What if 2024 wasn’t about big AI announcements, but about building the foundation for the next tech paradigm?
Looking back on AI developments in 2024, if we are honest, it wasn’t the year of flashy GPT-5 moments. It wasn’t like 2022 and 2023, which were full of earth-shattering AI breakthroughs. Instead, something was happening quietly in the background: generative AI laid the groundwork for what comes next, and we are starting to see that it has the potential to reshape how we interact with technology entirely.
Let’s explore what 2024 did for AI’s bridge into the future.
💡Spotlight: AI’s Quiet Revolution
Bridges in Tech - A Short History
Whenever technology takes a new leap, hardware alone isn’t enough: an application layer helps shift technology to a new level and unlocks new features and platforms:
Mainframes to PCs: Remember those room-sized machines from the 1950s? They enabled the development of a whole new class of platforms and unlocked computing for big business.
However, mainframes also enabled a new set of applications that became interesting to individual users, not just businesses. These applications became the bridge to the PC: personal computers took off because people wanted to use those apps at home.
PCs to Smartphones: PCs went through many changes and improvements, and their applications evolved along with them, but the Internet was the key innovation and enabler of the next bridge.
The Internet let PCs communicate with each other independently of the actual device. In turn, it changed how applications were designed and became the bridge to smartphone apps that are always connected.
Mainframe at the Computer History Museum - source: CHM
Generative AI is Building the Next Bridge
So, if generative AI is the next bridge, where does it take us? It's not just making workflows smoother and faster; it’s laying a foundation for different kinds of interactions.
AI has the potential to create interfaces that are aware of context. We provide that context to AI via text, sound, or images, and AI can already respond in many different ways, primarily text and sound.
Since AI understands context, it understands a user's needs in any given situation. Thus, it can provide exactly the interface that is needed, when and how it is needed. I hope you follow my musings here. Let me give an example.
New Platforms - Wearables and Such
Apple’s earbuds are often in my ears when I am not interacting with someone in real life. They are lightweight, and I barely notice them anymore. If these earbuds become more aware of the situations I find myself in, they can start chiming in when relevant. From there, they feel only one step away from being a full-time personal assistant, maybe paired with a camera. You get where I am going with this.
And what about smart glasses? Google and Meta showed us where they were going earlier this year. There is no need to pull out our phones: glasses can handle the layer that shows us relevant information and blends it with the real world, while embedded microphones and cameras collect all the context that is needed.
Until now, smart glasses have failed because they still relied on manual input, typically through keyboards, for every task; the inputs didn’t feel any more natural or simpler. With context-aware AI, the need for these manual user inputs disappears.
Here are the demos, if you haven’t seen them:
Lastly, the automotive world will also gravitate heavily toward these AI assistants. It is the most natural environment, and cars can easily be equipped with all the tech and computing to support this; heat, battery, and form factor are far less constrained than in glasses or earbuds. Drivers need to keep their eyes on the road and do not want to interact with a UI, so they are even more likely to prefer a natural interaction.
Combine all of this with AI trained to recognize distinct hand gestures, and we have a smooth, complete (and hopefully keyboard-free) interaction.
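To make the context-aware idea a bit more concrete, here is a minimal sketch of how such an assistant might route multimodal context to the right output. Everything here is hypothetical: the ContextSignal fields, the modality names, and the routing rules are my own illustration, not any real product’s API.

```python
# Hypothetical sketch of a context-aware assistant loop. Field names,
# modalities, and rules are illustrative assumptions, not a real product API.
from dataclasses import dataclass
from enum import Enum, auto


class OutputModality(Enum):
    VOICE = auto()        # earbuds: short spoken response
    OVERLAY = auto()      # smart glasses: visual layer blended with the world
    DASHBOARD = auto()    # car: minimal heads-up information
    NONE = auto()         # stay silent, don't interrupt


@dataclass
class ContextSignal:
    """Multimodal context the device collects without explicit user input."""
    transcript: str       # what the microphones heard
    scene: str            # rough label from the camera, e.g. "driving", "walking"
    user_is_busy: bool    # e.g. in a face-to-face conversation


def choose_interface(ctx: ContextSignal) -> OutputModality:
    """Decide when and how to chime in, instead of waiting for keyboard input."""
    if ctx.user_is_busy:
        return OutputModality.NONE        # respect real-life interaction
    if ctx.scene == "driving":
        return OutputModality.DASHBOARD   # eyes stay on the road
    if ctx.scene == "walking":
        return OutputModality.OVERLAY     # glasses blend info into the world
    return OutputModality.VOICE           # default: a short spoken reply


if __name__ == "__main__":
    ctx = ContextSignal(transcript="when is my next meeting?",
                        scene="at_desk", user_is_busy=False)
    print(choose_interface(ctx))          # OutputModality.VOICE
```

The point is not the specific rules but the shape of the loop: the device collects context on its own, and the decision of when and how to respond never requires a keyboard.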
Why 2024 Matters for 2025 and Beyond
So there we have it. In 2025, we will see more devices that can greatly benefit from generative AI. I believe it has now matured enough to integrate and test. Some of the hardware devices launched earlier were premature.
We talked about treating them as “ugly babies” but giving them a chance in “The Story of Humane AI, Pixar, and Their Ugly Babies.”
We’ll see far more evolved devices in 2025; they will bring the promised value and make wearables an actual leap in technology for us.
Want to dive deeper?
If you’re curious to explore this idea further, I recommend checking out “The Gen AI Bridge to the Future”, an excellent long read by Ben Thompson that sparked much of this reflection.
📖Another Read of the Week
I am the kind of person who tries to squeeze every last bit of toothpaste out of my tube before committing to a new one. And this is exactly how AI researchers on the hunt for training data treat the internet. They are about to run out of data, but there is no new tube…
What are your thoughts? Are wearables powered by generative AI the next big leap, or do you see it going another way? I’d love to know.
Have a great rest of the week, and since you won’t get another email from me before next Thursday, I am wishing you and your families a wonderful holiday season filled with rest, joy, and Gluehwein!
How did you like this edition?