I've been using this year's fastest smartphones, but I've yet to find a use case that requires all this extra performance.
When was the last time you felt your smartphone really couldn’t cope? I’m running last year’s Pixel 8 Pro — hardly a benchmark topper — as my daily driver, and it’s just as silky smooth as the more powerful ASUS ROG Phone 9 Pro I’ve been using on the side. Even before that, I struggle to recall the last time an app jerked and juddered into life. Outside of a few niche use cases, smartphone power has been bountiful for a few years now, yet modern manufacturing makes each additional leap more and more expensive while delivering few real-world benefits. We may have already passed the point of diminishing returns, but we’re paying more and more for the privilege.
This year’s Qualcomm Snapdragon 8 Elite is a prime example; it offers significant leaps in benchmark performance but is said to be considerably more expensive than its predecessor. Likewise, MediaTek’s Dimensity 9400 commands a higher price tag too, yet you’ll struggle to tell a phone packing these chips apart from last year’s model when it’s in hand. Money thrown at higher benchmark scores could be spent on better cameras, newer battery tech, or anything else on your wishlist. Returning to the ROG, the baseline model has dropped the telephoto camera, no doubt to keep up with the spiraling costs of top-tier performance.
With reports that the next-gen Snapdragon 8 Elite 2 will be “significantly” more expensive once again, something has to give, be it higher price tags or sacrifices to other key specs. But is that a trade-off any of us are really willing to make?
I’m all for extra performance, of course, but there has to be an opportunity to use it. There’s no point in splashing out on a sports car to plod around the inner city, after all. For smartphones, you don’t need the latest and greatest chip to sail through your daily apps or even hit 60fps in demanding Android games. A further 20% or greater boost to peak smartphone performance feels academic at this point; it’s just not going to make one bit of difference to virtually any use case you could name. There’s more to high-end chips than raw speed, of course, but you’d struggle to make the case that mainstream flagships are hinging their sales on niches like Wi-Fi 7 or barely finalized 5G technical releases, either.
A further 20% boost to peak performance feels academic at this point.
What these future high-end smartphones need is a compelling use case to leverage all this extra performance headroom. And no, I don’t think AI is the answer there. Consumers are lukewarm on it so far and today’s features run pretty well on modern and even slightly older hardware already.
Handheld gaming feels like an underutilized option — today’s flagships far outpace the beloved Nintendo Switch. But unless you’re willing to tinker with Winlator or similar, you’re not playing modern AAA titles on your phone, and most classic emulators will run on a potato. Barring a major change in the fundamentals of Android/Linux and Arm gaming, future mobile GPUs are destined to remain underutilized.
That leaves the oft-promised but never truly realized pocket PC. I could certainly see myself using my handset for work if I could run a proper Linux desktop from my phone. Blowing up my docs and messaging apps on the big screen is already possible to an extent, but letting my phone’s processor rip at full tilt for image editing or managing my vast media library still falls far short of a PC-like experience. With limitations like five concurrent apps and meager software options, Samsung DeX — the clear market leader here — has never quite lived up to that potential. I’m keen to see if Samsung can level up DeX with the more powerful Galaxy S25 series, but we’ve not heard anything about it in the new One UI 7 beta, and Linux on DeX is long buried.

Robert Triggs / Android Authority
Perhaps the meshing of Android and ChromeOS, or the prospect of running Debian apps on your phone as soon as Android 16, could open the door to a broader range of software and use cases that would stretch tomorrow’s high-end chips. Certainly, the Snapdragon 8 Elite is more than powerful enough to power a high-end tablet or even a mid-tier work laptop, where photo editing or compiling code could really make use of those beefy CPU cores. Strangely enough, though, we haven’t seen flagship mobile silicon make its way over to Chromebooks, which remain a predominantly AMD and Intel affair. And that points to another hurdle: Linux software support for Arm processors is passable, but even proper desktop support wouldn’t mean we could finally run our Steam libraries on our smartphones. It would, however, be a significant step towards catching that all-important developer attention.
Extra performance comes at higher and higher costs.
Still, big performance really needs a big display to make use of it, so more smartphones would also need to embrace DisplayPort over USB-C to make the pocketable PC a real thing. There are a lot of moving parts, then, which makes it seem unlikely we’ll be making the most of these powerhouse chipsets anytime soon. With that in mind, there’s little point in buying a phone for future-proof performance.
Instead, perhaps Qualcomm’s rumored Snapdragon 8s Elite and other upper-mid-range chips will be more than sufficient for everyday smartphone performance and better suit our readers’ preferences for longer battery life and more affordable smartphones. Better battery life and a lid on prices consistently poll as your two most desirable features in a modern smartphone, and the features-over-power idea certainly seems to work well enough for Google’s Pixel. Other flagship OEMs seem far less likely to see things the same way, but Redmi/Xiaomi’s general manager was surveying fans earlier this year on whether they wanted to keep chasing “upstream” costs or make phones more affordable. So consumers aren’t the only ones pondering the cost/benefit ratio of modern hardware.
The ChromeOS merger and running Debian apps on Android hint at future powerhouse use cases, but we'll have to see.
The days of a new phone feeling notably snappier than your previous model are long gone anyway, and it seems odd to be turning my nose up at overkill performance we could barely have imagined a decade ago. But it’s the compromises, particularly when it comes to squeezing product costs, that present the industry with a tough choice. If there are compromises to be made, I’d much rather take the more expensive cameras, cutting-edge battery cells, a no-compromise approach to build quality, or just a freeze on prices over extra performance I cannot realize.
Until a compelling new use case comes along that makes my current smartphone splutter, I’m increasingly convinced manufacturers should take their foot off the throttle a little and settle for more modest yearly gains. They could even start really focusing on other hugely important and popular aspects of a great smartphone, like better battery life.
Absolutely warranted, but there's no reason to worry: apps will keep up with the performance development, and in the blink of an eye you'll be forced (security, security, security!!) to upgrade to new versions of your phone app or photo app, dragging in 'free' and top-notch AI-assisted VR capabilities or whatnot that require power beyond current levels. Do you need that feature? Absolutely not. Nevertheless, you will have it. Rumours say MS is working on some sort of AR for Teams, which is just a random example triggering the "... but why?" Ryan Reynolds meme look. Wasting resources is always a good reason for pushing up the performance ceiling.
> apps will keep up with the performance development and in a blink of an eye
Web apps gang here. I don't know about the rest of you folks, but I rarely install an app unless I have to or there is no other option.
The few apps that do get installed aren't all that taxing.
I almost never install an app unless absolutely required to either, but you talk as if web apps weren't some of the worst offenders in the matter of performance wasted.
Strange that the developer sphere of the W3 industry is not dominated by, you know, W3.
<warning snark-level="epic" generalization-level="expansive">
Perhaps if they weren't gluing together unaudited components, with a programming language designed to obsolete the blink tag, using the most expensive laptop on the planet, they'd be less profligate with a user's resources.
</warning>
Things are no better in the game industry and the days of Avie requiring engineers to develop on hardware more representative of users machines are long gone.
There's practically a limit. JavaScript is fairly limited, as are the DOM and CSS. This isn't like 3D graphics.
These limits are not immutable; web standards grow by the day, keeping up with performance development as much as the apps themselves. There's already WebGPU, so 3D graphics is already a thing on the web. Also, case in point, the built-in CSS filter effects get pretty taxing on not-too-old mobile hardware, as I lamented yesterday [1].
3D has long been a thing, but again, as a matter of practicality, its use is quite rare for a number of reasons.
I never understood why one app has to cover everything. Apps are bloatware today. Same goes for Windows. All I wanted worked on 98SE or XP. How did the OS get from 1GB to 30GB for listening to music, surfing, some office work, and viewing pictures? Ah yes: Teams, OneDrive, Defender, a firewall (I have my own hardware, why is this forced on me?), and other cloud integration no one asked for. If they ever force online-only accounts, that's where I draw the line with MS, and I'm sure they will do it soon. They badly need and want our data.
I asked for it. A lot less time spent doing tech support when backing up and restoring a device is as simple as logging into an iCloud account.
Obviously, it shouldn’t be mandatory, but the ease of use surely benefits the majority of the population. Having to reinstall the operating system every now and then was not tenable.
> but the ease of use
You mean the ease of restoring Windows. Why do I have to click twice in Win 11 to select another window on the taskbar?
No clue, haven’t used Windows in over a decade.
But I mean ease of use, overall. Before, if you had issues with your computing device, you had to troubleshoot how to get data off of it, transfer it, possibly avoid malware, maybe pay someone to help you.
Now, you set it to back up to iCloud, and if something happens to the hardware you buy a new device and log in, and you’re good to go. Or if it’s software, you might have to reinstall (I’ve never had to).
Windows is not an OS any more, it's an advertising platform.
With all of the features and plug-ins being crammed into Teams, it may become TeamsOS before long
Bought a mid range Samsung A55 and it does everything I want.
But then I have a gaming PC and a big television at home.
It's funny, as my phone (a 2022 Moto Razr) can work as a PC if I plug it into a monitor with its USB-C port. I can plug it into a monitor, plug a mouse and keyboard into the monitor's built-in USB-C hub, and it works just fine. Has a desktop mode and everything! If the monitor doesn't have a hub, I can use the phone as a mouse/touchpad. Plus, if the monitor supports it, it'll even keep the phone charged rather than draining the battery!
And I don't just use it as a gimmick: I use an HDMI/USB-C cable to use it with my TV as a streaming/light-gaming setup. It's nice to be able to plug it in, kick off a streaming app or YouTube, or play some Minecraft or something on my TV in bed, all comfy.
Can confirm: an S22 Ultra plugged into a Dell docking box (or whatever it's called — not a typical docking station, it just connects to the laptop via a thick USB-C cable) works out of the box with mouse and keyboard.
Firefox with uBlock Origin works very well, for example. The only thing is it doesn't adjust automatically to the native screen resolution (1600p in my case). But it's still just Android; even with full filesystem access it feels vastly subpar to a normal desktop PC if I need more than browsing or other Android apps.
My dream was to be able to use VR glasses and something like Samsung DeX for an ultra-portable coding workstation.
I bought a pair of Viture Pro glasses, but they were pretty unusable for coding for me. Maybe they would have been OK for watching videos, but not for typing and needing to read all areas of the screen.
I am thinking of using it as a remote desktop to connect to my home or server PC. Shouldn't be very CPU-intensive.
Yeah, Firefox on DeX also lacks PC-style tabs, which Chrome does have. It's the only case where I really use Chrome a lot.
Chrome even switches to tabs when you unfold the Fold phones, which is really nice.
How secure is it? Will Samsung or others be able to look over your shoulders?
Boy do I have some bad news for you: Automated Content Recognition [0, 1, 2]. If your Smart TV is connected to the Internet, it can also track what you're watching or doing, even if you're using it as an external monitor [3] (in Dutch).
[0]: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[1]: https://news.ycombinator.com/item?id=41658828
[2]: https://news.ycombinator.com/item?id=41669765
[3]: https://tweakers.net/nieuws/227186/samsung-automatische-cont...
We're talking about phones here, not TVs.
Or are you saying this is built into phones too?
TL;DR: I'm of the opinion that the answer is probably "not yet", "it's in the works", or "it's already here, but not yet widely known".
In short, I couldn't find strong conclusive evidence for "yes" or "no".
The Wikipedia article on ACR [0] seems to be quoting CIO-Wiki [1] --- or vice-versa. The statement would imply "yes":
> Real-time audience measurement metrics are now achievable by applying ACR technology into smart TVs, set top boxes and mobile devices such as smart phones and tablets. This measurement data is essential to quantify audience consumption to set advertising pricing policies.
On the other hand, a paper on ACR [2] implies it only occurs on TV's (so, this points us towards "no"):
> [...] Unlike traditional online tracking in the web and mobile ecosystems that is typically implemented by third-party libraries/SDKs included in websites/apps, ACR is typically directly integrated in the smart TV’s operating system. [...]
... but then, in its conclusion one could make the case for "not yet" as they reference Microsoft's Recall (this, to me, makes me lean on "not yet"):
> [...] Finally, although different than ACR, our auditing approach can be adopted to assess privacy risks of Recall (Microsoft, 2024) – which analyzes snapshots of the screen using generative AI (Warren, 2024). [...]
Collecting my thoughts on this paper, I'm a bit disappointed that we seem to have a double-standard for the nomenclature: if the content recognition happens on a PC, then it's labeled as "generative AI" (should've probably been called LLM by the authors) and if it takes place on a TV-shaped computer (they're mostly Android TV's, after all, right?) then it's called ACR. I think that it has not been properly articulated that what people are worried about [3] is that Microsoft's Windows Recall is (or will become) "ACR with extra steps".
To conclude (and extend this to the mobile phone domain), I'll leave a "thought experiment": is all the AI processing power on new mobile phones going to be used exclusively by the users, and for the users?
-----
Some nuanced notes...
I'm conflicted about whether to demonize ACR entirely or not. To me, "ACR" means something that is running all the time listening to user's surroundings or screenshotting a user's displayed information for the purposes of improving targeting or tracking their behavior (this seems to match Wikipedia's definition at first glance). I am in part validated by [2] as well:
> [...] At a high level, ACR works by periodically capturing the content displayed on a TV’s screen and matching it against a content library to detect the content being viewed on the TV. It is essentially a Shazam-like technology for audio/video content on the smart TV (Mohamed Al Elew, 2023).
However, after doing some research, I discovered that a particular knowledge field may be misusing the term (or using the ACR term for lack of a better term like "reverse image search" or "content-based image retrieval" --- CBIR, CBVIR, QBIC --- in their vocabulary), and perhaps in the process inadvertently "whitewashing" the term.
Take, for example, the European Union's Intellectual Property Office's (EUIPO's) discussion paper titled "Automated Content Recognition: Discussion Paper – Phase 2 ‘IP enforcement and management use cases’" [4] (PDF). I think that they are conflating some terms like hashing, fingerprinting, watermarking and labeling it under the ACR term, then they're making valid-sounding use-cases like "smartphone solutions to detect genuine or counterfeit products" (products, by definition, are not content,... so I fail to see how ACR ties in). Perhaps someone more knowledgeable can correct me if I'm misreading the paper (I am no IP lawyer, but have worked as an Information Security Officer).
I think the EUIPO paper also glosses over some possible privacy implications: e.g., they link to an article called "Are 3D printed watermarks a “grave and growing” threat to people’s privacy?" [5], but in the context of using "RFID tags or serial numbers" to protect IP on 3D printed objects ... they do not discuss the possible privacy implications of, for example, being tracked by a possible "RFID-tag-cloud" of such objects. I know that this is beyond the scope of "is there ACR running on mobile phones", but I wanted to showcase what I think is the misuse of the ACR term to expand into the physical --- "offline" --- world, in the process losing its more "academic" meaning.
[0]: https://en.wikipedia.org/wiki/Automatic_content_recognition
[1]: https://cio-wiki.org/wiki/Automatic_Content_Recognition_(ACR...
[2]: https://arxiv.org/html/2409.06203v1
[3]: https://www.windowscentral.com/software-apps/windows-11/micr...
[4]: https://euipo.europa.eu/tunnel-web/secure/webdav/guest/docum...
[5]: https://3dprintingindustry.com/news/are-3d-printed-watermark...
That's a really longwinded way of admitting that you don't know lmao.
Why would it be connected to the internet if it's used as an external monitor? Just don't tell it the Wi-Fi password.
Sorry for the delay in my response.
To answer your question directly: I'm pointing out unexpected privacy pitfalls of using a smart TV's full set of features (i.e. running apps and using it ... as a monitor).
Although I agree with the point of your solution... I disagree with minimizing the danger of such anti-features.
To elaborate, try thinking of your average reasonable person and think of their journey into learning how to preserve their privacy without losing access to the features of the services and products they have paid for. Without a massive effort it is ultimately an oxymoron.
A reasonable person would expect that your (internet-connected) smart TV would collect info to help tailor future products based on customers' usage (app usage frequency, standard or cable usage frequency, frequency of use as an external monitor). You would not expect to have to watch what you say in front of such a device because it's literally listening to you [0] (in 2015, you needed to use the remote to use the voice detection service).
Additionally, reasonable users of smart TVs (and other IoT devices) might feel like they are no longer tracked with uniquely identifiable information because they turned off "targeted advertising" (if the service allows setting that option), but that only prevents their advertising ID from being tracked [1].
Moreover, a reasonable person might expect that using a DNS-based blocklist would be a sort of "revocation of consent" to being tracked, but tracking services are savvy when it comes to PII exfiltration [2]:
> [...] We find that personally identifiable information (PII) is exfiltrated to platform-related Internet endpoints and third parties, and that blocklists are generally better at preventing exposure of PII to third parties than to platform-related endpoints. [...]
Finally, there have also been studies that show a lack of transparency when it comes to GDPR requests about the data collected through Automatic Content Recognition (ACR) [3].
So, my point is that "just don't use your product for most of its intended use" might be a thought-terminating cliche that prevents us from taking a step forward in stopping the normalization of unreasonable privacy transgressions (PII exfiltration, audio spying by third-party service providers, monitoring of external devices' screens).
[0]: https://www.bbc.com/news/technology-31296188
[1]: https://www.theverge.com/2019/10/11/20908128/smart-tv-survei...
[2]: https://arxiv.org/abs/1911.03447
[3]: https://www.ucl.ac.uk/electronic-electrical-engineering/news...
Not the ones that compete with the aforementioned Razr, unfortunately.
The latest Z Fold6 can do DeX, but it's a bit hidden, not official.
The Fold4 switches to DeX as soon as you connect a monitor. I wonder why they made it harder to use?
Ooops. Sorry.
I meant the Z Flip 6. Not the fold. The folds have always had fully supported DeX.
The Flips did not have a DP-capable USB-C port until the Flip 5, and still did not support DeX due to thermals. But the Flip 6 has it with a developer option, but only the "new" DeX.
Sorry for the confusion on my side. I thought of the Flip as the OP mentioned the Motorola Razr which is positioned against that, not the Fold.
What temperature does your phone reach when you do this for an hour?
Phones perform thermal throttling before getting too hot. The question thus becomes: how does it perform after one hour?
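For the curious, you can actually watch the throttling happen: on Android (via `adb shell` or a terminal app) and on Linux generally, the kernel exposes its thermal zones under `/sys/class/thermal`. A minimal Python sketch that polls them; zone names vary by SoC, some zones aren't readable without root, and the millidegree scaling is the standard sysfs convention:

```python
import glob
import os

def millideg_to_c(raw: str) -> float:
    """sysfs thermal zones report temperature in millidegrees Celsius."""
    return int(raw.strip()) / 1000.0

def read_thermal_zones(base="/sys/class/thermal"):
    """Return {zone type: temperature in degrees C} for every readable zone."""
    readings = {}
    for zone in glob.glob(os.path.join(base, "thermal_zone*")):
        try:
            with open(os.path.join(zone, "type")) as f:
                name = f.read().strip()
            with open(os.path.join(zone, "temp")) as f:
                readings[name] = millideg_to_c(f.read())
        except OSError:
            continue  # some zones are unreadable without elevated permissions
    return readings

if __name__ == "__main__":
    for name, temp in sorted(read_thermal_zones().items()):
        print(f"{name}: {temp:.1f} C")
```

Run it in a loop while the phone is docked and under load, and you can see exactly when the SoC starts shedding clock speed.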
Not great. I tried replacing my laptop with a Samsung phone + monitor combination on a trip; it didn't really work out. Phones are not built for continuous load.
The official DeX docks have a fan built in. This helps a lot, especially if you take the phone out of its case. I need to do that anyway because the USB connector doesn't go in deep enough with the case on.
A dock + fan + monitor already weighs more than a laptop...
Umm, yeah, but I don't travel with those. I keep them on site and just plug in my phone when I get there. I have one set up at every place I spend a lot of time.
Yeah, I used to work on what was eventually sold as the Meta Quest VR headset and we all knew that thermal throttling would be a major sticking point.
Thankfully my job was all about reducing rendering latency to prevent people from getting nauseous, so thermal throttling was outside of my scope.
I had a Huawei P20 Pro that did much the same back in 2018.
I never really used it for much, a bit of light browsing and really just as a gimmick, but yeah, there was a desktop of sorts, you could use all the apps, and the touchpad/mouse thing worked. You could attach a Bluetooth keyboard too, IIRC.
Kind of a shame my iPhone doesn't do this (I assume; I haven't tried), but I'm not sure if I'd use it.
iPhone has good support for peripherals, hubs and external displays, but lacks a desktop mode.
Actually, I use my iPhone with a USB-C/HDMI cable, the Remote Desktop client and a Bluetooth keyboard when traveling. Some apps will let you use an additional display just fine.
Oh interesting, I suppose I have had it wirelessly project to the tv before...
I might give it a go when I upgrade to a USB-C model.
OK so I've now tried this with a new USB-C iphone.
Yeah it's painful to use! You can set up a mouse, and use a physical keyboard for input, but it doesn't attempt to do any more than mirror the screen onto the external device by default.
Huawei's desktop mode was limited, but I think you're right - you can say the iPhone has good device compatibility, but there's no good way to use it docked. Not that the Android ones were 'good', but they made an attempt!
Yes: https://support.apple.com/en-sg/guide/iphone/iph95baac91f/io...
Works fine. I don't know if you can format drives, but you can definitely read and write to external disks and network shares.
Yes, if it's USB-C. You won't like the experience though.
Which is quite frankly weird, given that the iPad has fairly robust mouse/keyboard support at this point, and at least some nods towards window management
Interesting, I wouldn't mind an Android phone that can do similar, but I'm not looking for a clamshell. For anyone else who, like me, is naive about such things, the key search terms seem to be: DisplayPort alternate mode over USB-C. Support seems patchy.
I hate USB-C for laptop charging ports: too fragile for regular use. However, I built a few things recently and I love the simplicity.
- External touch screen: only needs one cable, USB-C, for picture, sound, touch, and power! (The DP mode you mentioned is required.)
- As a power source. My caravan computer (Dell Wyse 5070) uses USB-C as its power source with a cheap DC adapter. My laptop charges over USB-C from 60W or more.
- We have two Rolands (P-1, S-1); both can use their USB-C cable for direct audio in AND out, which just works on Linux.
- For the Rolands, I can use my phone as a sound DAW or source, or both. I can also attach the touch screen, ...
All using the same (cheap and widely available) cable. Which is amazing, and it took my whole life to get here.
FWIW, apparently Google shipped software support for DP Alt Mode for Pixel 8 and newer a few months ago.
I find it highly annoying that these rather powerful handheld smartphone computers don't have decent port access for use with instrumentation, etc.
A huge range of features comes to mind: Geiger counters, oscilloscopes, sound level measurement, light intensity, etc. The potential to expand smartphone capability is enormous, yet no manufacturer has tackled it. Why not?
Why aren’t there multiple USB ports? By now, why don’t all phones use USB 3? Why aren’t there general-purpose D/A and A/D ports/outputs for instrumentation? Why don’t they include a GPIB-like bus to connect to things? Why can’t we use the screen as an oscilloscope with a bandwidth of, say, 100MHz?
Of course not everyone needs these features but the smartphone stands out as an ideal device for use in instrumentation and measurement and data collection (of the other kind).
I find it amazing that no smartphone manufacturer has branched out into this field. Such potential and no one is servicing it. Phone manufacturers are missing out by not servicing this scientific/techie measurement market.
Why, say, doesn’t Fairphone provide a range of interchangeable ports/modules that can be changed for different functions, to add additional sensors, etc.?
> Why aren't there multiple USB ports? By now why don't all phones use USB-3? Why isn't there general purpose D/A and A/D ports/outputs for instrumentation? Why don't they include a GPIB-like bus to connect to things? Why can't we use the screen as an oscilloscope with a bandwidth of say 100MHz?
Because the population of people who would actually use that functionality rounds to approximately zero. There do exist phones with multiple USB ports, and there are plenty of USB 3-capable phones. Instrumentation and measurement is an extremely specialized field, and the few people who might find a use for such features would quickly switch to a dedicated interface for something like an oscilloscope anyway.
For a general-purpose ADC and DAC, they already make one: it’s called a USB-C audio adapter.
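That's only half a joke. An audio-class ADC is AC-coupled and limited to audio bandwidth, so it won't replace a real DAQ, but for audio-band signals it works: whatever capture API you use to pull raw 16-bit PCM off the adapter, turning samples into voltage estimates is just scaling. A minimal Python sketch; `FULL_SCALE_V` is an assumed calibration constant you would have to measure for your particular adapter:

```python
import math
import struct

# Assumed input voltage at digital full scale; calibrate per adapter.
FULL_SCALE_V = 1.0

def pcm16_to_volts(raw: bytes, full_scale=FULL_SCALE_V):
    """Decode little-endian signed 16-bit PCM into voltage estimates."""
    count = len(raw) // 2
    samples = struct.unpack(f"<{count}h", raw[:count * 2])
    return [s / 32768.0 * full_scale for s in samples]

def rms(volts):
    """Root-mean-square of a block of samples, e.g. for a level readout."""
    return math.sqrt(sum(v * v for v in volts) / len(volts))
```

With that in place, a sound-level meter or audio-band scope trace is just a matter of plotting `pcm16_to_volts` output over time.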
Also, there is only a small market of people who don't want a laptop. Or a Raspberry Pi. I bet a Pi with a display would make a better, if slightly larger, instrument.
It is also strange to complain about multiple ports when you can get a USB-C hub. It used to be that they were all USB-C to USB-A, but USB-C-only hubs are starting to appear.
"It is also strange to complain about multiple ports when can get a USB-C hub"
First, a USB hub is bulky and frankly a damn nuisance to carry about; it needs to be integrated.
Second, USB-C/OTG on many phones is implemented in a way that makes it essentially useless. For instance, USB 2 (which is what most phones have) is too slow by miles; access to external devices via OTG is often set to time out after, say, 30 minutes; on many devices, permission to access OTG is awkward; and the vast majority of phones don't support NTFS for external drives or internal SD cards.
Frankly, this is a first-class fucking nuisance. Why can't I have direct compatibility with my PCs and laptops?
To get my phones to do what I want, I have to root them, and even that isn't fully satisfactory. Rooting is a pain and takes a lot of time to do correctly, and I'd prefer not to do it.
With OTG there is no consistency across phone manufacturers. Why the hell not?
Also, why are phone manufacturers removing micro SD slots from phones?
More on NTFS: why doesn't Android support NTFS after all this time? The Linux kernel now does, and has for some time, so why has Google nuked it from the Android kernel?
Now if I go to that despised Chinese company Huawei I can get NTFS support by default (on OTG at least). That Huawei can offer NTFS as a standard feature and most others do not tells me a lot about the oligopoly-like smartphone market.
People here have been criticizing me and voting me down because I've had the hide to suggest features for specialized phones but no one seems to bother addressing the elephant in the room which is that the smartphone market has reached stagnation.
There's been fuck-all worthwhile innovation in recent years.
> no one seems to bother addressing the elephant in the room which is that the smartphone market has reached stagnation.
There was a project from Google some 10 years ago to make the smartphone modular (Project Ara). It died in its infancy.
> First, a USB hub is bulky and frankly a damn nuisance to carry about, it needs to be integrated.
First, multiple USB ports are bulky, and frankly a damn nuisance to carry about. I don't need an extra port for the majority of my uses.
> Second, USB-C/OTG on many phones is implemented in a way that makes it essentially useless. For instance, USB-2 (which is on most phones) is too slow by miles; access to external devices via OTG is often set deliberately to time out after say 30 mins or such, also on many devices permission to access OTG is awkward, and the vast majority of phones do not support NTFS for external drives or internal SD cards.
The timeouts are for idle time. If you have a long period of idle time, you aren't using the device... which consumes power from your tiny phone battery. It's very reasonable for the uses most people use them for. I'd agree it would be nice to have the ability to disable the timeout, but I can't speak to what every phone manufacturer is doing.
> More on NTFS, why doesn't Android support NTFS after all this length of time? After all, the Linux kernel now does and has done so for some time, so why has Google nuked it from the Android kernel?
What's wrong with exFAT? It's an external hard drive. Better compatibility with everything anyways.
> People here have been criticizing me and voting me down because I've had the hide to suggest features for specialized phones but no one seems to bother addressing the elephant in the room which is that the smartphone market has reached stagnation.
And what exactly is wrong with that? Laptops also haven't had "innovation" in the sense you're describing in years either. They serve their purpose, do what they do well, and get marginally better year over year. It's fine.
"What's wrong with exFAT? It's an external hard drive. Better compatibility with everything anyways."
One of the reasons why my reply is late is because of exFAT problems. Right, I don't expect you to believe that but it's true—see my comment at the end.
exFAT may have better compatibility, but it's about the worst file system ever invented. Have you ever wondered why Microsoft made it freely available and not NTFS? Yes, everyone believes the MS mantra that exFAT uses fewer resources than NTFS, and that's true, but few seem aware of how diabolical this file system actually is and how high its potential is for losing one's data.
Why? Well, it has only one FAT table, not two; clobber that and you're stuffed big-time, and many people lose data exactly this way.
Why would Microsoft eliminate the second backup FAT table in exFAT when it was proven so valuable in earlier versions of FAT—especially given exFAT's higher capacity where the loss of data would be even more disastrous? (Even Blind Freddy ought to be able to see the necessity of having a second FAT to protect one's data.)
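For anyone who'd rather verify the single-FAT claim than take it on faith, here is a minimal Python sketch using the field layout from Microsoft's published exFAT specification (FileSystemName at offset 3, NumberOfFats as a one-byte field at offset 110; FAT16/FAT32 default to two FATs, exFAT ordinarily to one). The boot sector below is synthetic, not a real volume dump:

```python
# Minimal sketch: read the NumberOfFats field from an exFAT main boot sector.
# Per Microsoft's exFAT specification, NumberOfFats is a one-byte field at
# offset 110; it is 1 on ordinary volumes (2 is reserved for TexFAT).

def exfat_fat_count(boot_sector: bytes) -> int:
    """Return the NumberOfFats value from an exFAT main boot sector."""
    if len(boot_sector) < 512:
        raise ValueError("boot sector too short")
    if boot_sector[3:11] != b"EXFAT   ":
        raise ValueError("not an exFAT volume (FileSystemName mismatch)")
    return boot_sector[110]

# Synthetic example for illustration only (not a real volume dump):
sector = bytearray(512)
sector[3:11] = b"EXFAT   "   # FileSystemName field at offset 3
sector[110] = 1              # NumberOfFats: a single FAT, no backup copy
print(exfat_fat_count(bytes(sector)))  # prints 1
```

On FAT16/FAT32 the equivalent field (NumFATs in the BPB) is almost always 2, which is exactly the backup being argued for above.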
Let me give you an example: about 12 months ago I was transferring some data stored on my smartphone's 512GB microSD card to my PC when I lost about 231GB of data! That's no small loss and I've still not recovered it.
You may well ask how that happened. Simple: the SD was removed from the phone and placed in the PC's USB slot to move a small percentage of files to the PC. Unfortunately, I removed the SD before the write process had completed; that clobbered the FAT and everything was deleted. The card was not only devoid of all files but, according to Windows, now unformatted.
OK, so it was my fault, that I accept—doubly so because I didn't follow the golden rule of copying everything first before deleting the source files (although in this case that wouldn't have saved the files that I'd not moved).
I tried the usual unerase utilities and procedures and recovered only shrapnel. What else would one expect when file systems don't store files in contiguous sectors? This is yet another antiquated design where data integrity is traded for speed without adequate fallback/safety protections.
You're probably asking why I removed the SD from the phone instead of transferring the data by OTG. That's easily explained too: OTG on phones is inordinately slow except for the very few that use USB3, not to mention that Android (especially since v10) won't let one copy data from, say, the Android directory. (In this instance, even though I wasn't copying all the files, there were enough of them that removing the SD provided a worthwhile saving in time; the card had over 300k files stored on it.)
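To put rough numbers on the USB-2 OTG pain for a copy of that size, a back-of-envelope sketch; the throughput figures are illustrative assumptions, not measurements:

```python
# Back-of-envelope: why USB 2.0 OTG feels painfully slow for bulk copies.
# Assumed sustained throughputs (rough real-world ballpark, not measured):
# USB 2.0 tops out around 35 MB/s effective; USB 3.x card readers often
# sustain several hundred MB/s.

def transfer_hours(size_gb: float, throughput_mb_s: float) -> float:
    """Hours to move size_gb gigabytes (1 GB = 1000 MB here) at a given MB/s."""
    return (size_gb * 1000) / throughput_mb_s / 3600

usb2 = transfer_hours(231, 35)    # the 231GB figure from the anecdote above
usb3 = transfer_hours(231, 400)
print(f"USB 2.0: {usb2:.1f} h, USB 3.x: {usb3:.2f} h")  # roughly 1.8 h vs 0.16 h
```

With these assumed rates the same copy drops from nearly two hours to about ten minutes, which is the whole argument for USB3 OTG (or pulling the card) in a nutshell.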
Fortunately, most of the files were already backed up, so only a small incremental amount of nonessential data was lost, and I've put the SD aside until I get around to mirroring it in case I ever want to recover them. Incidentally, this isn't the only time I've killed an exFAT table, but it's the only time I've lost data (the other times the data was already backed up). And I'm not alone; I could tell stories of others I know personally who've lost data in similar circumstances.
I've experimented with exFAT on both SD cards and SSDs and have come to the conclusion that if one wants to quickly kill all data on such a drive, without secure delete, so that it looks like a new drive, all one has to do is disconnect it during a write operation. It's that catastrophic.
Now, comparing exFAT with NTFS is chalk and cheese. Had I been using NTFS, none of that would have happened. NTFS is a proper journaling file system with good inbuilt protections; it's hardy and will take much abuse before significant data is lost. Moreover, the argument that NTFS carries large resource overheads is now moot; we're long past the days of floppy disks and pissy little processors.
If you think I'm whingeing without due reason, ask yourself why some USB thumb drive manufacturers pre-format large-capacity drives (>32GB) in FAT32 when Microsoft limits FAT32 formatting to 32GB in Windows. Good question. I'd suggest they're well aware of the dangers of exFAT and how easy it is to lose data when using it. The answer is obviously an economic one: they want to minimize customers returning drives after losing data, and they don't wish to develop a reputation for flaky drives.
Now ask yourself why Microsoft forces users formatting drives larger than 32GB to use exFAT or NTFS rather than FAT32, yet still gives Windows the ability to read FAT32 drives of much larger capacity.
The question also remains why Android doesn't support NTFS out of the box, especially nowadays given that the Paragon NTFS driver is an integral part of Linux. There are multiple reasons for this, some publicly known, others we can only speculate about. The same goes for why many manufacturers have removed SD card slots from phones, but that's a separate matter too big to address here, except to say their excuses are so weak they're just pathetic.
One fact remains certain, none of the big manufacturers gives a damn about integrity of users' data despite all the palaver and noise over security, hackers stealing data etc. If they did then they'd be just as concerned with data entropy† no matter what its source—but they aren't. I could say much more but this post is already long enough.
BTW, my Huawei phone (which I no longer have, as I dropped it and broke the screen) was very handy. It still used exFAT for its SD card, but its OTG supported NTFS by default. Moreover, it used an excellent NTFS driver: even over USB-2, files could be copied very quickly to an external drive. To transfer files to my PC I used to couple a 1TB NTFS-formatted SSD via OTG, and it worked perfectly (OTG also provided enough power to run the SSD without effort).
It's little wonder so many were pissed off with the restrictions over Huawei as the company's products work extremely well. It's a shame other manufacturers don't follow suit.
Finally, this reply is late because my phone's SD (a 512GB Samsung SDXC Pro) could not be read after I'd done multiple file transfers from internal memory to the SD (afterwards the SD couldn't even be seen). The problem occurred at almost the same time as your post.
Fearing the worst, I immediately shut the phone down and moved the SD to the PC, where I found the card and its files 100% OK. I then wasted considerable time copying the files to the PC because many paths exceeded the 260-character path-length limit in Windows. Right, another ridiculous historical artifact that Microsoft and others still haven't fixed (the same goes for the ongoing limitation on reserved characters: why can't we use, say, a '?' in filenames when it's clearly possible?).
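For what it's worth, the 260-character cap (MAX_PATH) applies to classic Win32 path handling; the extended-length \\?\ prefix sidesteps it, and Windows 10+ can also opt in system-wide via the LongPathsEnabled registry value plus an application manifest. A small sketch (the helper name is mine, purely illustrative):

```python
# Sketch: working around the Win32 MAX_PATH (260-char) limit mentioned above.
# Prefixing an absolute path with \\?\ tells Win32 APIs to use extended-length
# paths (up to roughly 32,767 characters).

MAX_PATH = 260  # historical Win32 limit, including the terminating NUL

def extended_length(path: str) -> str:
    """Return a \\?\-prefixed form of an absolute Windows path."""
    if path.startswith("\\\\?\\"):
        return path                       # already extended-length
    if path.startswith("\\\\"):
        # UNC share: \\server\share -> \\?\UNC\server\share
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path

deep = "C:\\backup\\" + "\\".join(["folder"] * 40) + "\\file.txt"
assert len(deep) > MAX_PATH               # would trip the classic Win32 limit
print(extended_length(deep)[:8])          # prints \\?\C:\b
```

Plenty of tools (including older Explorer versions) still don't use the prefix, which is exactly how one ends up hand-shortening paths during a bulk copy.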
Why so many users simply accept this unacceptably shitty and ergonomically terrible tech without complaint just beats me.
Clearly, you're one who is actually satisfied with your tech.
† Same goes for the unacceptable and irresponsible way Microsoft has implemented the SSD Trim function in Windows. If it isn't obvious I will provide an explanation.
There's fantastically few phones with multiple USB ports. Some of the Lenovo Legion gaming handhelds are the only ones I can think of.
But we are seeing some signs this might change, and I sure hope it does, in a big way. A hub can be OK, but with the need to mix display out, peripherals, and power in fancy ways, USB3 really is a limited option. USB4 fixes many of the constraints but is, alas, way too high-end. Anyway, here's an upcoming very cheap tablet with multiple ports, and reports are that a lot more are coming. Given the marginal cost of ports, it's about frelling time! Do it! You won't ever have adoption if there's no (or almost no) option! https://www.techradar.com/pro/this-tablet-has-a-genius-featu...
> Why isn't there general purpose D/A and A/D ports/outputs for instrumentation?
There is one, it's the headphone jack. (Or dongle.)
Is that autocorrect? Because I have no idea what dedicated means here.
It's obviously useful, that's how Square launched their card reader business.
Can you access the ADC in a non-audio context? If not then it's dedicated to audio. The Square reader is like a telephone modem, presumably some app then listens to the "microphone"?
A small correction here: 90%+ of audio I/O ports aren't general-purpose. This is because there are almost always DC-blocker circuits on each output, commonly a series capacitor. With very few exceptions, you can't use your soundcard to provide an accurate DC output such as a control voltage.
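To quantify that correction: the series DC-blocking capacitor and the load impedance form a first-order high-pass filter, so DC (0 Hz) is rejected by design. A small sketch with illustrative component values (not taken from any specific soundcard):

```python
import math

# Why a soundcard output can't deliver DC: the series DC-blocking capacitor
# and the load form a first-order high-pass filter with corner frequency
# f_c = 1 / (2 * pi * R * C). Component values below are illustrative.

def corner_hz(r_ohms: float, c_farads: float) -> float:
    """First-order high-pass corner frequency for a series-C, shunt-R network."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# e.g. a 100 uF coupling cap driving 32-ohm headphones:
fc = corner_hz(32, 100e-6)
print(f"{fc:.1f} Hz")  # roughly 50 Hz; anything below rolls off, DC is blocked entirely
```

This is also why you can't use such an output as a control voltage source: any steady-state level decays through the capacitor regardless of what the DAC commands.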
"Because the population of people that would actually use that functionality rounds to approximately 0."
How can you say that, where are your figures/stats/evidence?
For starters, I'm 1 person, so the market is not 0. And I know of others, and I know I'm not alone. Clearly you don't work in test and measurement.
Moreover, this article alone has raised the matter of additional features.
Anyway, it's only a matter of time until some (likely small) manufacturer breaks the boring mold and steps out. That's inevitable because the market is already saturated with phones that all have exactly the same features.
> And I know of others, and I know I'm not alone.
Children who play Fortnite on the go are a much bigger market and will always be. Smartphones are content consumption devices.
"Smartphones are content consumption devices."
The point I put forward is what I and a small select section of the market wants as features—NOT what's on offer from manufacturers now.
The argument people are putting here is that manufacturers would not serve that market. These are two separate issues. Isn't that clear?
When someone makes what I want then I'll buy it. BTW, I've not suggested anything that cannot be made now with existing technology.
Instead, manufacturers are removing important features such as FM radio and 3.5mm headphone sockets. These are the first two specs I look for on a phone before I buy it. If they're not included, I'm not interested. Full stop!
In brief, the lowest common denominator is NOT what I want. The market has to be more diverse. That in part is what the article is about and why I commented.
The tiny minority of people who want this do not form a market large enough to bear the amortized engineering and manufacturing cost of adding those niche features.
Imagine somebody asking for phones that have a built-in Swiss army knife because there is a (tiny) segment of the market who would benefit from it.
Before it makes sense to integrate these features to the phone, you would expect to see a thriving ecosystem of third-party external dongles providing the same, for example.
The market is not more diverse because there is no money to be made by making it more diverse. You underestimate the cost just as much as you overestimate the size of the market segment that wants those features.
"The tiny minority of people who want this do not form a market large enough to bear the amortized engineering and manufacturing cost of adding those niche features"
I beg to differ, Fairphone thinks it's economically viable to make phones that are upgradeable by changing modules. The company is doing it now!
All I want is a spare slot inside one of these phones that I can insert a specialized module. There would be no problem in getting specialized manufacturers to make those modules, witness the fact that there are already thousands of small modular devices on the market already.
If Fairphone were to provide a slot I'm damn sure there'd be call to use it. Think Raspberry Pi and all its ports, just transfer the concept to a phone that's almost suitable now.
> I beg to differ, Fairphone thinks it's economically viable to make phones that are upgradeable by changing modules. The company is doing it now!
Fairphone's devices are, frankly, bad value and not particularly interesting. Being able to “upgrade” components is not very useful if those components are already years behind. Also, Fairphone doesn't allow this! You can just replace components with the same kind… which just makes repairing easier.
> All I want is a spare slot inside one of these phones that I can insert a specialized module. There would be no problem in getting specialized manufacturers to make those modules, witness the fact that there are already thousands of small modular devices on the market already.
You have one. It’s called the USB-C port. Make whatever you want with it, it’s widely supported and compatible.
> For starters I'm 1 person, so the market is not 0. And I know of others, and I know I'm not alone
Of all the people you interacted with in any way, shape or form this year, what percentage of those would benefit from this?
It rounds to 0, assuming you’re not a shut-in
Everything rounds to zero on a big enough scale.
The point you are making is irrelevant as it has nothing to do with my point which is what I want as features on my phone. Whether manufacturers make them or how many other people may want the same as I do is a separate issue.
What phone do you use now and are you satisfied with it?
Tell me that and I may be able to then figure where you are coming from. BTW, read my comment to f6v.
You asked:
> How can you say that, where are your figures/stats/evidence?
I responded with:
> just look outside in the real world?
That’s totally relevant to you asking why no mass-market phones support increasingly niche features that ~0% of the population need or want, and you expanding on this by saying “me and one other person I know want this”…
Read my reply to david-gpu.
I can't make it any clearer, sorry I don't speak Klingon.
> The point I put forward is what I and a small select section of the market wants as features
Or
> “Because the population of people that would actually use that functionality rounds to approximately 0."
> How can you say that, where are your figures/stats/evidence?
Pick one
> For starters I'm 1 person, so the market is not 0. And I know of others, and I know I'm not alone. Clearly you don't work in test and measurement.
I said approximately 0, when compared to smartphone sales… which it is. I don’t need a survey to know that, since the vast vast vast majority of people using phones don’t even know what an oscilloscope, a DAC, or an ADC is. If you think that’s untrue, I’d suggest widening your horizons a bit.
Also, I’d think that you don’t work in test and measurement. I use test equipment in my day job and I wouldn’t trust the output of a phone if I’m doing actual production work. You need calibrated equipment for that. Maybe okay for debugging, but there’s tons of cheap measurement equipment that works just fine for general purpose debugging and has a much better UI than anything you’d get on a touch screen.
To expand upon that point: a new feature for smartphones that really takes off has to fall in one of two categories:
1. so incredibly convenient to always have with you, that everyone's willing to overlook shortcomings compared to dedicated equipment. Prime example: camera.
2. Offer a new type of use that is widely considered desirable. Example: mobile access to Internet.
Most use cases either cater to too few people and/or fall into the category of "those who'd really care already have dedicated equipment, which is better". With flagship phones costing more and more, that equipment is probably also cheaper - or, at least, its price tag is not that outlandish.
Even replacing laptops probably won't catch on, simply because companies can easily provide a good laptop for half the price of a flagship phone (if not less). So they're not going to facilitate that. And if the boss isn't on board, would you want to use your own private phone as your primary work laptop?
Does it have to take off though?
The GP seems to be basically saying the same thing as many others have expressed. They want their phone to basically be a PC. Whether that involves upgrading, installing your own OS, or otherwise just being able to use it for an arbitrary workload.
I don't know why the market is such that expensive phones remove features like extra SIM card slots, SD card slots, or headphone jacks. It doesn't seem impossible that Samsung could find room in its lineup of 8000 phones for a ruggedised phone with some kind of standardised interface on the back.
Especially as, increasingly, you go to a restaurant and the waiter has a phone/tablet. And I'm sure there are many other industries that could do with basically a phone that does one extra specialised thing, be that an RFID scanner, a borescope, or a label printer.
Why have these features built in if you can have a USB back case/device with them?
There are many back cases for specialised functionality like PoS, IR cameras, etc.
Also, in many cases you can have devices connecting wirelessly via Bluetooth - e.g. I bought a Bluetooth trichoscope recently.
Making a specialised phone instead of a plug-in is a far more expensive option. And it's risky if the market is so small that nobody did a plug-in first.
There were, I think, two companies that tried to build a modular phone where you could e.g. replace the camera with a module of your own design. The issue was that they were bulkier and more expensive than a regular design; you can't cheat physics.
Also, how much more would you be willing to pay for such a phone? Could you pay 2-3x the price and be happy with upgrades every 3-5 years, and with the phone having electronics that are 1-3 years behind top-of-the-line at launch? Because that's the reality of production for niche products.
I agree with you except for one point:
> Even replacing laptops probably won't catch on, simply because companies can easily provide a good laptop for half the price of a flagship phone
You’re comparing top of the range phones to low end laptops. A low to mid range phone can be bought from a reputable manufacturer (Samsung) for about £200 and it has plenty of processing to do email, video calls, PowerPoint, and basic spreadsheets. My wife works for the government and I’m pretty sure one of those phones would be about as responsive as the hunk of junk they provide her with, and for half the price.
Phones are also often paid for too. If you had a keyboard + screen + battery and could just clip your phone in that would feel like a pretty nice setup.
Bonus there is that it's one place for all your data.
Alternatively, on the data side, I've been shocked at how cheap storage is now. I bought a 256GB USB stick, shipped, for £10. I've got tiny 4TB external drives. You can get terabyte microSD cards! It'd be pretty nice to have a setup where the device is intended to be blank and you just pop your data card in. I know it's more complex than that, but not that complex, for what would feel to me like a fairly sci-fi thing.
You're right about phones also being provided. With that in mind, I'd say: if your company and other places of business you tend to visit all go for this concept, I think it could work. But as long as the boss expects you to work on a laptop, this'll remain a niche application.
Honestly, I'd love to be surprised and see everyone switching to docking setups everywhere. But I just think the positives over current working modes are too small to gain the needed traction. And that's before considering downsides other than investment costs/overcoming network effects.
"Also, I’d think that you don’t work in test and measurement"
I've worked in one of the prototype laboratories of one of the biggest electronics companies in the world where we actually developed communication equipment.
Using bench type instrumentation has nothing whatsoever to do with what I'm talking about. At no time did I ever say that my portable device was a substitute for professional test equipment. The idea is preposterous.
Given your comment, one has to ask what you do and at what level.
Conflating ideas and 'reading' stuff that isn't actually there is the single biggest problem with the internet.
> Using bench type instrumentation has nothing whatsoever to do with what I'm talking about. At no time did I ever say that my portable device was a substitute for professional test equipment. The idea is preposterous.
>> Of course not everyone needs these features but the smartphone stands out as an ideal device for use in instrumentation and measurement and data collection (of the other kind).
Your words, not mine. Without more context (which you didn't provide) it's unreasonable to expect me to understand what you mean (you, in fact, advocate I don't 'read stuff' that isn't actually there) outside of replacing actual test and measurement equipment.
I disagree. My phone supports UVC video input, so I can plug my $25 standards-compliant borescope into it, I can plug my standards-compliant RME sound card into it for field recordings, and I have a bunch of input peripherals that just 'surprise work' with Android now too.
You should have a look at the GATT spec for Bluetooth LE and the UART service. It has never been easier to build scientific devices that rely on your phone for compute. The thing is, I think we're actually at the point where it's cheaper/more reliable/more predictable to stick an nRF52 chip into a peripheral than to try to support a physical connection to your phone - I guess from a security standpoint as much as anything else.
There are a _ton_ of scopes and stuff that sit in the prosumer space that leverage your phone. They're just not wired, and I think it makes sense for them not to be.
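As a sketch of how little plumbing the BLE route needs: the UUIDs below are Nordic's published UART Service (NUS) identifiers, the de-facto "serial over BLE" convention; the chunking helper (name and framing mine) reflects the fact that each GATT notification is capped by the negotiated MTU minus the 3-byte ATT header:

```python
# Peripheral-side plumbing sketch for the Nordic UART Service (NUS).
# The UUIDs are Nordic's published NUS identifiers; any BLE library
# (bleak on the phone/PC side, the nRF SDK on the peripheral) can use them.

NUS_SERVICE = "6e400001-b5a3-f393-e0a9-e50e24dcca9e"
NUS_RX      = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"  # central -> peripheral (write)
NUS_TX      = "6e400003-b5a3-f393-e0a9-e50e24dcca9e"  # peripheral -> central (notify)

def chunk_for_gatt(payload: bytes, mtu: int = 23) -> list[bytes]:
    """Split a payload into notification-sized chunks (MTU minus 3-byte ATT header)."""
    size = mtu - 3
    return [payload[i:i + size] for i in range(0, len(payload), size)]

# A sensor reading framed for transmission over NUS_TX (illustrative payload):
chunks = chunk_for_gatt(b"temp=21.5C;rh=40%;t=1718000000\n")
print(len(chunks), chunks[0])
```

With the default 23-byte MTU that's 20 bytes of data per notification, which is why most NUS-based instruments negotiate a larger MTU before streaming.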
> It has never been easier to build scientific devices that rely on your phone for compute.
I think the actual need right now is the other way around, don't build new devices and instruments, rather support existing devices.
"There are a _ton_ of scopes and stuff that sit in the prosumer space that leverage your phone"
Models off phones with these features integrated please?
Where can I find them on, say, GSMArena.com?
I'm not quite sure what you're asking for. Phones that support Bluetooth LE?
Read my other replies, eg the one to david-gpu where I mention Fairphone options.
Typically I wouldn't consider 'read my other replies to understand what I could be talking about' to be a valid or particularly respectful response when it comes to online discussion, but I've read through most of your comments here, and I still can't figure out what the argument you're trying to make is.
My Ulefone has TWO. And the other one is a heavy-duty industrial-type connector. As luck would have it, I redesigned that connector, because AliExpress was not selling one without some instrument attached. https://github.com/timonoko/Ulefone-Usmart-connector
Everyone I talk to says "smartphones are damn too big", and yet no manufacturer produces reasonably sized phones anymore, because apparently "they wouldn't sell" / "the market size is too small".
If this is too difficult, then your use case is orders of magnitude more difficult. Unfortunately, everyone tries to sell to the same generic user, with very little actual space for differentiation targeting niches.
People say they want smaller phones that fit in their pockets, myself included. But are they willing to accept the much lower battery life, the smaller text on the screen, and the reduced screen real estate?
The smaller iPhones didn't sell well.
My solution is simple, I carry two phones a tiny dumb/feature phone for just phone calls and a smartphone for internet access.
No one can phone me on my smartphone as it uses a data-only SIM so it's difficult for Google and others to make sense of the data they steal from me (even then they only receive data garbage for reasons I won't bother to mention here).
It's hardly less convenient as the dumb phone is small enough to fit in my shirt pocket whereas the smartphone is in a trousers pocket.
Motorola did a sort of add-on thing with their Moto Z some years back. It had swappable back-plates. They launched an extended battery, a speaker, a camera with larger optics and mechanical zoom, a projector attachment... some sort of amazon-branded smart-speaker attachment... they were supposed to kick off an add-on ecosystem but I guess it turned out most people weren't really bothered.
Not quite what you're asking for, it wasn't standard bus connectors or anything.
For any professionally interesting sensor use you'd want something that is certified by a trusted third party. The costs of this would easily overcome any savings you made in hardware.
And also, a lot of professional equipment providers are going sort of this way, building portable devices around an OEM Android phone.
Just a guess, but I've noticed that most app development has concentrated around cross-platform frameworks/languages. If you're writing in JS or Dart or whatever, you're not typically going to write a high-performance app that has tight integration with an optional peripheral.
As far as I understand the weird ART vs JVM difference also means Java libs are only sometimes going to work (would love to be corrected here though)
< 100 customers, that's why...
Although I would buy one.
> Why don't they include a GPIB-like bus to connect to things?
I would _love_ a smartphone with a GPIB connector. Something like https://www.starte-e.net/product/ieee-488-gpib-cn24pin-metal... /s