In recent days, the technology industry has once again had a few innovations to celebrate. And even though this is a publishing blog, I still want to write this article. Because I have a few questions. But let’s go through it step by step. What have we seen?
The Origami Smartphone from Samsung
First, Samsung – probably under some pressure from the competition – presented the first foldable smartphone-tablet-thing, i.e., the first Foldable. No one knows the exact details, and the device wasn’t really available for live viewing. But from what we know from the leaked video: folded, the thing looks like an early Nokia brick; unfolded, it looks quite elegant, like a large, beautiful pane of glass. It will also cost quite a bit.
The Competition Folds Along
But apparently origami is a must in 2019, because shortly afterwards Huawei introduced its Mate X, which in turn seems much more mature than Samsung’s brick. It makes any gadget boy’s mouth water, because the device really does look cool on the posters.
I Have a Few Questions
This brings us to my questions. First and most importantly: what is this good for? What are the usage scenarios? Samsung has thought about this and come up with Google Maps: you start with a location search on the small screen and then “unfold” to reveal the larger view. I can see that as a genuine need.
But then the trouble starts: when you unfold the device, all of Google Maps’ controls are rearranged and repositioned for the larger screen. I literally lose my orientation. Moreover, the device no longer has a clear top and bottom, no clear front and back. Incidentally, I know this disorientation from the newest iPad Pro, where I never really know where the camera is. And, true to Murphy’s law, the image on the screen is always oriented the wrong way. In other words: for the user, handling such a device becomes massively more complex, and the comfort gained in the Google Maps scenario isn’t really that great.
Another question: why should I put myself through this? Such a thing needs a larger battery; it needs more cameras and a myriad of sensors constantly trying to work out how the user is holding the device and which functions should be active at the moment. In terms of dimensions, the devices are too big for the pocket and, above all, too heavy. The fact that none of the devices could be handled during the presentations strikes me as a clear indication that there are doubts about the surface and its durability. And especially with the Huawei device: what exactly is the deal with scratches on the screen? After all, the display is on the OUTSIDE when folded!
Is It All Just Marketing Hype?
Let’s look at another device, this one from Oppo. It looks elegant, being able to simply unfold it like that.
But: I unfold the thing (with both hands) and then… do I have to keep holding it with both hands? What happens when one hand lets go? Does it fold back up? Or is there some kind of lock in the hinge? And what about watching videos (after all, one of the core use cases for a larger screen)? Is the display stable enough that I can put the tablet in a stand or even just rest it on my knees? We probably won’t know until the devices actually get into our hands and aren’t just shown in beautiful advertising images and trailers.
Do We Even Need Phones Anymore?
We have… the HoloLens. More precisely, the second generation of these AR glasses, which Microsoft has just presented. Pardon me, at Microsoft it’s called Mixed Reality, and that’s how the future is supposed to look. We won’t need phones anymore, because we’ll all be wearing this disguised bicycle helmet with eye tracking, speech recognition, and gesture recognition. With the helmet on, we then see our real environment augmented with… well, with what exactly?
Let’s look at the demo:
The lady on the stage now sees texts floating in the room. Or objects that she can enlarge and shrink with her hands and push to the side. She sees program windows, e.g., Microsoft Teams, in which she can scroll with her hands. Then she operates a slider and presses a button in front of an animated wind farm. She doesn’t need to type; that’s handled by speech recognition. At the end, the demo wants us to believe that you can play the piano with the bicycle helmet. “Playing the piano” in this case means hitting a few random piano keys. Mozart, this is not.
All of this is fun and looks great. But let’s be honest: what does it bring us? What can I do with this thing now that I couldn’t already do with a simple computer or smartphone? Why should I play the piano virtually? Why should I operate software in a Windows window – with the latest technology on my head and a well-worn user interface in front of my eyes? Isn’t this just pointless?
Nerd: 1 / Common Sense: 0
Don’t get me wrong: the nerd and gadget freak in me is already starting to save up and can hardly wait to get my hands on these things. Technologically, it’s all great fun. But the rational part of me wonders a bit whether innovation is heading in the right direction. Shouldn’t we be paying more attention to our real world rather than creating an artificial one? Where is the communication, the interpersonal, the connecting aspect in all this technical stuff? And why – now that the smartphone market is slowly becoming saturated and the realization has set in that we don’t need a new resource-intensive device every few months – do we need a new generation of devices that can’t really do more, but just do everything differently? The purpose somehow doesn’t unfold for me here.


