New survey suggests the vast majority of iPhone and Samsung Galaxy users find AI useless – and I’m not surprised

www.techradar.com/phones/new-survey-suggests-th…

A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren't using AI. Among both iPhone and Galaxy users, fewer than half of those surveyed have even tried AI features – 41.6% of iPhone users and 46.9% of Galaxy users.

So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

224 Comments

The only AI thing I use on my Fold is the photo cropping, definitely nifty to just pull out a subject. It's not perfect ofc, but way easier than manually trying to cut it out lol.

I was excited to see what it could do on my iPhone, but so far I haven't liked anything. The notification summaries are useless, for instance.

I do wonder if AI is being used in the background in ways I don’t see, but I doubt it.

If it is, you probably wouldn't be thrilled to find out how.

"PLEASE use our hilariously power inefficient wrongness machine."

This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.

Imagine if AI actually worked for users:
- Show me all settings to block data sharing and maximize privacy.
- Explain how you optimized my battery last week and how much time it saved.
- Automatically silence spam calls without selling my data to third parties.
- Detect and block apps that secretly drain data or access my microphone.
- Automatically organize my photos by topic without uploading them to the cloud.
- Do everything I could do with Tasker just by saying it in plain words.

How could you ensure AI privately sorts your pictures if the requests to analyze your sensitive imagery have to be made to a server? (One that built its knowledge by disrespecting others' copyright anyway, lol.)

Why must it connect to a server to do it? Why can't it work offline? DeepSeek showed us that it's possible. The companies want everyone to think that AI only works online. For example, the AI image enhancements on my mid-range Samsung phone work offline.

Oh, my bad, sorry, I'm not well versed.

That's why I asked :p

A lot of people assume that AI must mean a permanent server connection. I don't mind if it's a bit slower, as long as it's part of my device.

Do everything I could do with Tasker just by saying it in plain words.

Stop, I can only get so hard.

I find it absolutely useless for simple everyday tasks.

Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?

IT'S FIVE LINES

Get out of my face Gemini!

Yahoo was using their shitty AI tool to summarize emails THEN REPLACE THE FUCKING SUBJECT LINES WITH THE SUMMARY!

It immediately hallucinated raffle winners for a sneaker company and iirc they started getting death threats.

Or the shitty notification summary. If someone wrote something to me, then it’s important enough for me to read it. I don’t need 3 bullet points with distorted info from AI.

It'd be way less offensive if it were just presented as an option, instead of dancing around flashing at me.

Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?

The executives who don't do any real work, pretend they do (chiefly to themselves), and make ALL of the purchasing decisions despite again not doing any real work.

AI is useless and I block it anyway I can.

"Stop trying to make fetch AI happen. It's not going to happen."

AI is worse than adding no value – it is an actual detriment.

I feel like I'm back in those "You really want a 3D TV, right? Right? 3D is what you've been waiting for, right?" years all over again, but with a different technology.

It will be VR's turn again next.

I admit I'm really rooting for affordable, real-world, daily-use AR though.

I like 3D; too bad it barely had any content, even back in its day.

AR pretty much will happen, in my opinion as someone who roughly works in the field. It's probably going to be the next smartphone-level revolution within two decades.

I'm not commenting on whether it would be good or bad for society, especially with our current societal situation and capitalism and stuff, but I'm confident it will happen either way, and change the world drastically again.

I like the idea of AR very much, but for exactly the reasons you stepped around mentioning, I'll wait until I can get my hands on something FLOSS. When I'm buying glasses that run some KDE AR project licensed under the GPL, I'll feel like it's trustworthy. :D

Yeah, it'd be ideal for FOSS AR to exist, and I would really love for that kind of thing to be established.

I hate that nowadays AI == LLM/chatbot.

I love the AI classifiers that keep me safe from spam or that help me categorise pictures.
I love the AI based translators that allow me to write in virtually any language almost like a real speaker.

What I hate is these super advanced stochastic parrots that manage to pass the Turing test, and so people assume they think.

I am pretty sure that if they had asked specifically about LLMs/chatbots, the percentage of people not caring would be even higher.

The AI features present on Apple and Samsung phones are indeed useless.

They have small language models that summarise notifications and rewrite your messages and emails. Those are pretty useless.

Image-editing AI that removes unwanted people from your photos has some use.

However, top AI tools like deep research and Cursor, which millions of developers use to assist with coding, are objectively very useful.

Unless it can be a legit personal assistant, I’m not actually interested. Companies hyped AI way too much.

Seems like they hype it to themselves more than to the customers they tried to force-feed it to.

That's because it is.

Pointless resource hogging bloatware.

Please burst that bubble already so I can get a cheap second-hand server-grade GPU.

A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.

It's like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.
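The worry above can be made concrete with a quick back-of-envelope sketch. The numbers below are purely illustrative assumptions (not from the survey or the comment): even a tiny error rate turns into a steady stream of misses once the query volume is large.

```python
# Expected number of wrong answers at a given accuracy.
# `accuracy` and `queries` are illustrative assumptions, not measured figures.
def expected_failures(accuracy: float, queries: int) -> float:
    """Expected count of incorrect responses over `queries` requests."""
    return (1.0 - accuracy) * queries

# At 99.999% accuracy, a service handling a million queries a day
# still gets about 10 of them wrong every single day.
print(round(expected_failures(0.99999, 1_000_000)))  # → 10
```

Whether 10 daily misses matters obviously depends on how costly each miss is, which is what the drop-of-sewage analogy is gesturing at.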

I think it largely depends on what kind of AI we're talking about. iOS has had models that let you extract subjects from images for a while now, and that's pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.

As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn't handle Swedish? I don't know.

One of the examples I sent to a friend is as follows, but in Swedish;

Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don't understand why we pay for this. It's very disappointing.

And CoPilot was like "yeah, let me fix this for you!"

Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.

Most AIs struggle with languages other than English, unfortunately. I hate how it reinforces the "defaultness" of English.

I guess there's not much non-English internet to scrape? I'm always surprised how few social media platforms exist outside of the USA. I went looking because I was curious what online discourse would look like without any Americans talking, and the answer was basically "there isn't any" outside of shit like 2ch.

There are definitely non-American social media platforms and groups and stuff. I'm guessing the same thing keeping you from knowing about them is keeping other Americans from knowing about them.

Maybe but idk what you mean.

I could however use a list if you felt like making one for some rando online.

Deleted by author

That's so beautifully illustrative of what the LLM is actually doing behind the curtain! What a mess.

Yeah, it wonks the tokens up.

I actually really like machine learning. It's been a fun field to follow and play around with for the past decade or so. It's the corpo-fascist BS that's completely tainted it.

99.999% accurate would be pretty useful. There's plenty of misinformation without AI. Nothing and nobody will be perfect.

Trouble is they range from 0-95% accurate depending on the topic and given context while being very confident when they’re wrong.

The problem really isn't the exact percentage, it's the way it behaves.

It's trained to never say no. It's trained to never be unsure. In many cases an answer of "You can't do that" or "I don't know how to do that" would be extremely useful. But, instead, it's like an improv performer always saying "yes, and" then maybe just inventing some bullshit.

I don't know about you guys, but I frequently end up going down rabbit holes where there are literally zero google results matching what I need. What I'm looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And, that's fine. So, I have to take a step back and figure it out for myself. No big deal. But, Google's "helpful" AI will helpfully generate some completely believable bullshit. It's able to take what I'm searching for and match it to something similar and do some search-and-replace function to make it seem like it would work for me.

I'm knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I'm sure there are a lot of other more gullible optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.

To me, the best way to explain LLMs is to say that they're these absolutely amazing devices that can be used to generate movie props. You're directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It's so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look like exactly what you'd see in a hospital.

But, just like you'd never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that's hard, because it's so convincing.

We're not talking about an AI running a nuclear reactor, this article is about AI assistants on a personal phone. 0.001% failure rates for apps on your phone isn't that insane, and generally the only consequence of those failures would be you need to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on for a significant amount of people.

The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically requires about as much work as doing whatever you wanted the AI to do yourself. And using AI for creative things like art or videos is a fun novelty, but not something you do regularly, so your phone promoting apps you only want to use once in a blue moon is annoying. If AI were actually so useful that you could query it with anything and get back exactly what you wanted 99.999% of the time, it would absolutely become much more useful.

People love to make these claims.

Nothing is "100% accurate" to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.

So either we acknowledge that everything is already "sewage" and this changes nothing or we acknowledge that people already can find value from searching for answers to questions and they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.

Which gets to my big issue with most of the "AI Assistant" features. They don't source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead "ask jeeves" as it were. But I still want the citation of where information was pulled from so I can at least skim it.

99.999% would be fantastic.

90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

What we have now is like...I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

I haven't used Samsung's stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it's great.

Ideally, I don't ever want to hear an AI's opinion, and I don't ever want information that's baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That's what LLMs are actually good at.

Again: What is the percent "accurate" of an SEO infested blog about why ivermectin will cure all your problems? What is the percent "accurate" of some kid on gamefaqs insisting that you totally can see Lara's tatas if you do this 90 button command? Or even the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze.

Everyone is hellbent on insisting that AI hallucinates and... it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It's the same reason I always laugh when people talk about how AI can't do feet or hands and ignore the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.

Like I said: I don't like the AI Assistants that won't tell me where they got information from and it is why I pay for Kagi (they are also AI infested but they put that at higher tiers so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep dive on whether a given function even exists.

But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn't be "we need this to be 100% accurate and never hallucinate" and instead be "What web pages or resources were used to create this answer" and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.

Again: What is the percent “accurate” of an SEO infested blog

I don't think that's a good comparison in context. If Forbes replaced all their bloggers with ChatGPT, that might very well be a net gain. But that's not the use case we're talking about. Nobody goes to Forbes as their first step for information anyway (I mean...I sure hope not...).

The question shouldn’t be “we need this to be 100% accurate and never hallucinate” and instead be “What web pages or resources were used to create this answer” and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.

Correct.

If we're talking about an AI search summarizer, then the accuracy lies not in how correct the information is in regard to my query, but in how closely the AI summary matches the cited source material. Kagi does this pretty well. Last I checked, Bing and Google did it very badly. Not sure about Samsung.

On top of that, the UX is critically important. In a traditional search engine, the source comes before the content. I can implicitly ignore any results from Forbes blogs. Even Kagi shunts the sources into footnotes. That's not a great UX because it elevates unvetted information above its source. In this context, I think it's fair to consider the quality of the source material as part of the "accuracy", the same way I would when reading Wikipedia. If Wikipedia replaced their editors with ChatGPT, it would most certainly NOT be a net gain.

You know, I was happy to dig through 9yo StackOverflow posts and adapt answers to my needs, because at least those examples did work for somebody. LLMs for me are just glorified autocorrect functions, and I treat them as such.

A colleague of mine had a recent experience with Copilot hallucinating a few Python functions that looked legit, ran without issue, and did fuck all. We figured it out in testing, but boy was that a wake-up call (the colleague in question has what you might call an early-adopter mindset).

Perplexity is kinda half-decent with showing its sources, and I do rely on it a lot to get me 50% of the way there, at which point I jump into the suggested sources, do some of my own thinking, and do the other 50% myself.

It's been pretty useful to me so far.

I've realised I don't want complete answers to anything really. Give me a roundabout gist or template, and then tell me where to look for more if I'm interested.

I think you nailed it. In the grand scheme of things, critical thinking is always required.

The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I'm not an artist, so I oohd and aahd at some of the AI art I got to see, especially in the early days, when we weren't flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I'll pass.

The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code-coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.

Even those examples are the kinds of things that "fall apart" if you actually think things through.

Art? Actual human artists tend to use a ridiculous amount of "AI" these days and have been for well over a decade (probably closer to two, depending on how you define "AI"). Stuff like magic erasers/brushes are inherently looking at the picture around it (training data) and then extrapolating/magicking what it would look like if you didn't have that logo on your shirt and so forth. Same with a lot of weathering techniques/algorithms and so forth.

Same with coding. People more or less understand that anyone who is working on something more complex than a coding exercise is going to be googling a lot (even if it is just that you will never ever remember how to do file i/o in python off the top of your head). So a tool that does exactly that is.... bad?

Which gets back to the reality of things. Much like with writing a business email or organizing a calendar: if a computer program can do your entire job for you... maybe shut the fuck up about that program? Chatgpt et al aren't meant to replace the senior or principal software engineer who is in lots of design meetings or optimizing the critical path of your corporate secret sauce.

It is replacing junior engineers and interns (which is gonna REALLY hurt in ten years, but...). Chatgpt hallucinated a nonsense function? That is what CI testing and code review are for. Same as if that intern forgot to commit a file or that rockstar from Facebook never ran the test suite.

Of course, the problem there is that the internet is chock full of "rock star coders" who just insist the world would be a better place if they never had to talk to anyone and were always given perfectly formed tickets so they could just put their headphones on and work and ignore Sophie's birthday and never be bothered by someone asking them for help (because, trust me, you ALWAYS want to talk to That Guy about... anything). And they don't realize that they were never actually hot shit and were mostly always doing entry level work.

Personally? I only trust AI to directly write my code for me if it is in an airgapped environment because I will never trust black box code I pulled off the internet to touch corporate data. But I will 100% use it in place of google to get an example of how to do something that I can use for a utility function or adapt to solving my real problem. And, regardless, I will review and test that just as thoroughly as the code Fred in accounting's son wrote because I am the one staying late if we break production.


And just to add on, here is what I told a friend's kid who is an undergrad comp sci:

LLMs are awesome tools. But if the only thing you bring to the table is that you can translate the tickets I assigned to you to a query to chatgpt? Why am I paying you? Why am I not expensing a prompt engineering course on udemy and doing it myself?

Right now? Finding a job is hard but there are a lot of people like me who understand we still need to hire entry level coders to make sure we have staff ready to replace attrition over the next decade (or even five years). But I can only hire so many people and we aren't a charity: If you can't do your job we will drop you the moment we get told to trim our budget.

So use LLMs because they are an incredibly useful tool. But also get involved in design and planning as quickly as possible. You don't want to be the person writing the prompts. You want to be the person figuring out what prompts we need to write.

In short, AI is useful when it's improving workflow efficiency and not much else beyond that. People just unfortunately see it as a replacement for the worker entirely.

If you wanna get loose with your definition of "AI," you can go all the way back to the MS Paint magic wand tool for art. It's simply an algorithm for identifying pixels within a certain color tolerance of each other.

The issue has never been the tool itself, just the way that it's made and/or how companies intend to use it.

Companies want to replace their entire software division, senior engineers included, with ChatGPT or equivalent because it's cheaper, and they don't value the skill of their employees at all. They don't care how often it's wrong, or how much more work the people that they didn't replace have to do to fix what the AI breaks, so long as it's "good enough."

It's the same in art. By the time somebody is working as an artist, they're essentially at a senior software engineer level of technical knowledge and experience. But society doesn't value that skill at all, and has tried to replace it with what is essentially a coding tool trained on code sourced from pirated software and sold on the cheap. A new market of cheap knockoffs on demand.

There's a great story I heard from somebody who works at a movie studio where they tried hiring AI prompters for their art department. At first, things were great. The senior artist could ask the team for concept art of a forest, and the prompters would come back the next day with 15 different pictures of forests while your regular artists might have that many at the end of the week. However, if you said, "I like this one, but give me some versions without the people in them," they'd come back the next day with 15 new pictures of forests, but not the original without the people. They simply could not iterate, only generate new images. They didn't have any of the technical knowledge required to do the job because they depended completely on the AI to do it for them. Needless to say, the studio has put a ban on hiring AI prompters.

For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you're trying to solve. With that said, it feels like AI on mobile devices hardly solves any problems.
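That tradeoff can be sketched numerically. A minimal sketch with made-up numbers (the accuracies, task times, and error costs below are assumptions for illustration, not figures from the article):

```python
# Compare the expected cost of doing a batch of tasks by hand vs. with AI.
# Total cost = (expected errors x cost per error) + (time spent x cost per second).
def expected_cost(accuracy: float, seconds_per_task: float, tasks: int,
                  cost_per_error: float, cost_per_second: float = 1.0) -> float:
    errors = (1.0 - accuracy) * tasks
    time_cost = tasks * seconds_per_task * cost_per_second
    return errors * cost_per_error + time_cost

# Hypothetical: a human is 80% accurate but needs 10 minutes per task;
# an AI is only 75% accurate but needs 10 seconds per task.
human = expected_cost(accuracy=0.80, seconds_per_task=600, tasks=100, cost_per_error=500)
ai = expected_cost(accuracy=0.75, seconds_per_task=10, tasks=100, cost_per_error=500)
print(human > ai)  # the less accurate AI still wins on total cost here
```

Crank `cost_per_error` up to model a high-stakes task and the comparison flips, which is exactly the "it depends on the problem you're trying to solve" point.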

Much like certain other trends like 3D TVs, this helps us see how often "visionaries" at the top of a company are charmed by ideas that no one on the ground is interested in. Same with blockchain, cryptocurrency, and so many other buzzwords.

So maybe I'll mention it again: The Accountable Capitalism Act would require 40% of a company's board be made up of democratically voted employees, who can provide more practical input about how top-level decisions would affect the people working there.

I could actually have seen 3D TVs taking off, even with the requirement for glasses. At the time, there was a fad for 3D movies in theaters. But they needed to get content creators on board so that there was a reason to own one. There was no content, so no one invested, so in a year or two there's probably going to be some YouTubers making videos of "I finally found Sony's forgotten 3D TV."

I can see why people thought 3d tvs were a great idea, until they actually experienced it for themselves. It also didn't help that so much content wasn't genuinely shot in 3d, either, but altered in post.

I wonder if it has anything to do with the fact that it’s useless.

I don't think it's meant to be useful... for us, that is. Just another tool to control and brainwash people. I already see a segment of the population trusting corporate AI as an authority figure in their lives. Now imagine kids growing up with AI and never knowing a world without it. Comparing them to people who have memories of times before the internet is a good way to relate/empathize, at least I think so.

How could it not be this way? Algorithms trained people. They're trained to be fed info from the rich and never seek anything out on their own. I'm not really sure if the corps did it on purpose or not, at least at first. Just money pursuit until powerful realizations were made. I look at the declining quality of Google/Youtube search results. As if they're discouraging seeking out information on your own. Subtly pushing the path of least resistance back to the algorithm or now perhaps a potentially much more sinister "AI" LLM chatbot. Or I'm fucking crazy, you tell me.

Like, we say "dead internet." Except... nothing is actually stopping us from ditching corporate websites and just going back to smaller, privately owned or donation-run forums.

Big part of why I'm happy to be here on the newfangled fediverse, even if it hasn't exploded in popularity at least it has like-minded people, or you wouldn't be here.

Check out debate boards. Full of morons using ChatGPT to speak for them and they'll both openly admit it and get mad at you for calling it dehumanizing and disrespectful.

/tinfoil hat

Edit to add more old man yells at clouds(ervers) detail, apologies. Kinda chewing through these complex ideas on the fly.

Sometimes I wonder what is going to happen to all this tech in four or so years, when it's less profitable to keep the AI centers on.

Right now they are "free" because of all the investment that is going on. But they have a huge maintenance/energy cost.

They just need to capitalize the surveillance capabilities. Find a way to convince users they need access to everything on their phones in order to sell them first class convenience. Once you've done that there's plenty of money to be made.

Didn't MS say their AI isn't as profitable? Google is sure hell-bent on going with AI.

AI is a waste of time for me; I don't want it on my phone, I don't want it on my computer, and I block it every time I have the chance. But I might be old-fashioned in that I don't like algorithms recommending anything to me either. I never cared what the all-seeing machine has to say.

Not only that, but Google assistant is getting consistently less reliable. Like half the time now I ask it a question and it just does an image search or something or completely misunderstands me in some other manner. They deserted working, decent tech for unreliable, unwanted tech because ???

Profit potential. Think of AI as one big data collector to sell you shit. It is significantly better at learning things about you than any metadata or cookies ever could.

If you think of this AI push as "trying to make a better product" it will not make much sense. If you think of the AI push as "how do I collect more data on all my users and better directly influence their choices" it makes a lot more sense.

I don't think the LLM spouting nonsense responses part actively contributes to collecting and learning about user data much. Regular search queries and other behaviors (click tracking etc) already do this well enough and have most likely been using loads of machine learning for many years now

The point is to remove the user "search" experience, where a user selects options from page results. Yes, it is already heavily optimized for directing users where they want them to go, but AI is even better at it. With every prompt, the AI is directing you directly. It's basically turning the Internet into the TikTok scroll instead of, say, YouTube subscriptions or search. They want your entire interaction with the web to be through AI and its interfaces. This is significantly more powerful.

When was the last time you searched for something on TikTok? You just scroll and like stuff sometimes. Your experience is entirely crafted by TikTok.

This is what they want for AI for your ENTIRE online life. You will watch videos, research, shop, all from an AI that directly influences all of your decisions.

The entire point of websites is keeping you on their app or website for as long as possible. The best way to do that is to direct everything you do through a single app. It's a big reason the US wanted to force China to sell TikTok: US capitalists were having their profits hurt by it.

Well, that's depressing. Where's my Star Trek future?

Star Trek was space communism. So we'd have to kill the capitalists first.

We're heading more towards Star Wars and the Empire. See you in the resistance.

It actually gets in my way every time it does something, so that I have to stop what I'm doing to kill it. Would love to be able to uninstall it.

Not sure if Google Lens counts as AI, but Circle to Search is a cool feature. And on Samsung specifically there is Smart Select that I occasionally use for text extraction, but I suppose it is just OCR.

Of the Galaxy AI-branded features, I have tested only Drawing assist, which is an image generator. I fooled around with it for 5 minutes and have not touched it since. I am using the Samsung keyboard and I know it has some kind of text-generator thing, but I haven't even bothered to try it.

Certainly counts; Samsung has a few features, like grabbing text from images, that I found useful.

My problem with them is it's all online stuff, and I'd like that sort of thing to be processed on-device, but that's just me.

I think folks often think AI is only the crappy image generation or chatbots that get shoved at them. AI is used in a lot of different things; the only difference is that implementations like Drawing assist or that text-grabbing feature are actually useful and well done.

Not sure if Google Lens counts as AI, but Circle to Search is a cool feature.

Not to the point where it's worth having a button for it permanently taking up space at the bottom of the screen.

On a lot of phones you can hide the navigation pill, but Samsung started forcibly showing it when they added Circle to Search. Fortunately I don't have a Samsung phone.

It's cool

Is it useful? Idk

They're kinda like the S-Pen... is it cool? Sure! Do I find myself using it? No, not really.

The S-Pen is neat in theory, but if you have bad handwriting and can't draw it kind of loses the appeal lol.

The first thing I do with a new phone is turn off any kind of assistance.

Not just useless but actively unwelcome.

My kids' school just did a survey, and part of it included questions about teaching technology with a big focus on the use of AI. My response was "No," full stop. They need to learn how to do traditional research first so that they can spot-check the error-ridden results generated by AI. Damn it, school, get off the bandwagon.

I say this as an education major, and former teacher. That being said, please keep fighting your PTA on this.

We didn't get actually useful information in high school, partially because our parents didn't think there was anything wrong with the curriculum.

I'm absolutely certain that there are multiple subjects you may have skipped out on, if you'd had any idea that civics, shop, home economics, and maybe accounting were going to be the closest classes to "real-world skills that all non-college-educated people still need to know."

I regret not taking shop and home economics. Filing taxes and balancing checkbooks would be good skills to learn also.

I suppose that's exactly what they should be teaching.

And what exactly is the difference between researching shit sources on the plain internet and getting the same shit via an AI, except that manually it takes 6 hours and with AI it takes 2 minutes?

I think the fact someone would need to explain this to you makes it pointless to try and explain it to you. I can't tell whether you're honestly asking a question or just searching for a debate to attempt to justify your viewpoint.

You're implying there are trusted sources; I'm saying there are no trusted sources whatsoever, and you should doubt every source equally. So who's the one not understanding the principle?

I have Google Gemini turned off on my pixel, because I find that it makes my experience genuinely worse.

I agree. The first thing I always disable is AI, also on the TV.

Out of curiosity, what does AI do on a TV, other than voice recognition?

Sometimes it adds unnecessary things, like movie plots or cover art for the program on a channel.

"useless" is a more positive impression than I have.

A damning result for AI pump and dump scammers.

Every NVDA earnings call, lol. Old man Jensen had a (chip) farm, AI AI OH! The guy literally said "AI" almost 100 times in a call.

Sounds like corporate right now. Had a meeting earlier and it wasn't even focused on AI, but I heard it enough times to make my ears bleed.

Do I use Gen AI extensively?…

No but, do I find it useful?…..

Also no.

I'm shocked, I tell you. Absolutely shocked. And if you believe that, I've got some oceanfront property in Arizona I'll sell you, too.

Apple Intelligence is trash and only lasted 2 days on my 16 Pro. Not turning it back on either.

I've been on my iPhone 12 since it came out in September 2020 (I bought it on Halloween 2020 lol), and apart from the battery health being at 77%, I have NO reason to upgrade. Even then, I'll change the battery when it gets to 70% and... that's it.

Phones just aren't exciting anymore. I used to watch so many phone reviews on YouTube, and now they're all just... the same. Folding phones aren't that interesting to me. I saw that there's a new battery technology, but that's about the only new fun feature I'm interested in.

Most performance upgrades aren't used in the real world, and AI suuuuucks.

Nothing bores me more than their events that focus on AI.

"AI" (as in LLMs for the sake of having LLMs accessible on your phone) is so fucking useless...

From a technical standpoint it's pretty cool, I love playing around with Ollama on my PC every now and then.

But the average Joe seems to think it's some magic being with absolute fucking knowledge you can talk to using your phone. Apart from being stupid, I think this might actually endanger human capabilities like critical thinking as well as reasoning and creativity.

So many people use "Chat Jippity" to look stuff up. I know Google is enshittified... but OH MY GOD.

After mostly relevant information was available to everyone, the zone was flooded with advertising and fake news, and now the fake news is generated directly on the user's device, no interaction or connection to anyone necessary.

I think the hate of AI does what you describe more than the actual AI

Yeah, no

Definitely is. It's identical to the fear of immigrants; they just recycled the same articles. It's manufactured outrage. It'll either work or it won't; we'll find out, and it's not a big deal either way. Getting my information on this from media created by non-tech-literate journalists (who feel most threatened by AI) isn't a great move. It's like watching Fox News to stay on top of the latest immigration problems.

Take any articles on AI and put "immigrants" into the headline. It's weird how it's all the same crap. "They're taking our jobs" "they threaten our way of life" "society can't handle the influx of this thing".

Can we not compare the media's complicity in the dehumanization of a group of people currently being targeted by fascist regimes to articles about whether a certain technology is useful or not?

Not being critical of the way media spreads propaganda in any form is what allows it to be used to dehumanize in the first place.

It's not a good idea to wait until the house is on fire to install the batteries in the fire alarm

On Samsung they got rid of a perfectly good screenshot tool and replaced it with one that has AI; it's slower, clunkier, and not as good. I just want them to revert it. If I wanted AI I'd download an app.

Are you thinking of Smart Select? I just take a fullscreen screenshot and then crop it if I need part of it; I did that even when I had the previous Smart Select version. Overall I think the new version, with all four previous select options bundled into one, is better.

Yes, Smart Select. I do that now, but taking a full screenshot and cropping it is slower for me than the old Smart Select. I hate this new version; it's slower and doesn't work the same. We should get the option to pick, but they forced the upgrade and I have no choice.

I feel like they dedicate too much phone tech to AI and photos solely.

I do not need it, and I hate how it's constantly forced upon me.

Current AI feels like the Metaverse. There's no demand or need for it, yet they're trying their damndest to shove it into anything and everything like it's a miracle answer to every problem, including ones that don't exist yet.

And all I see it doing is making things worse. People use it to write essays in school; that just makes them dumber, because they don't have to show they understand the topic they're writing about. And considering AI doesn't exactly have a flawless record when it comes to accuracy, relying on it for anything is just not a good idea currently.

If they write essays with it and the teacher isn't checking their actual knowledge, the teacher is at fault, not the AI. AI is literally just a tool, like a pen or a ruler in school, except much, much bigger and much, much more useful.

It is extremely important to teach children how to handle AI properly and responsibly, or else they will be fucked in the future.

I agree it is a tool, and they should be taught how to use it properly, but I disagree that it's like a pen or a ruler. It's more like a GPS or a Roomba. Yes, they are tools that can make your life easier, but it's better to learn how to read a map and operate a vacuum or a broom than to be taught to rely on the tool doing the hard work for you.

You're sincerely advocating for teaching how to read a physical map? When will you ever need that, short of a zombie apocalypse?

It might be good to teach them this skill additionally, for the sake of brain development. But we should stay in reality and not replace real tools with obsolete ones in education, because children should be prepared for the real world and not for some world that doesn't exist (anymore).

For the same reason, I find it ridiculous how much children are cushioned to the brim and denied a view of the real world for 17 years and ~355 days in the US system. As soon as they're 18, they start to see the real world, and they're not at all prepared for the surprise.

You're sincerely advocating for teaching how to read a physical map? When will you ever need that, short of a zombie apocalypse?

I strongly advocate for it; it's a basic skill. Like simple math, reading and writing, being able to balance a budget, cooking, etc., being able to read a map is a necessary basic skill.

Maps aren't obsolete. GPS literally works off the existence of maps. Claiming maps are obsolete is like saying that cooking food at home is obsolete because you can order delivery.

Anyone who has been paying attention has been waiting for this enormous bag of shit to explode already.

The AI thing I'd really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don't allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.

What I don't want is:

  • Ways to make fake photographs
  • Summaries of messages I could just skim the old fashioned way
  • Easier access to LLM chatbots

It seems like those are the main AI features bundled on phones now, and I have no use for any of them.
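The notification-triage idea above can be sketched as a toy local scorer. Everything here is invented for illustration (the hint list, the weights, the threshold, and the function name `should_interrupt`); it is not any real phone API, just the shape of a classifier that could run on-device without shipping message text to a server:

```python
# Toy sketch of on-device notification triage: score each incoming
# notification locally instead of sending the text to a server.
# Hint words, weights, and threshold are all made up for illustration.

URGENT_HINTS = ("urgent", "asap", "call me", "emergency", "right now")

def should_interrupt(sender_is_contact: bool, text: str) -> bool:
    """Return True if the notification looks worth an interruption."""
    lowered = text.lower()
    score = 2 if sender_is_contact else 0           # known senders get a head start
    score += sum(hint in lowered for hint in URGENT_HINTS)
    return score >= 3                               # contact + at least one urgency hint

if __name__ == "__main__":
    print(should_interrupt(True, "Can you call me ASAP?"))        # True
    print(should_interrupt(True, "look at this cat pic lol"))     # False
    print(should_interrupt(False, "FLASH SALE ends right now"))   # False
```

A real version would presumably replace the keyword list with a small trained model and learn the threshold from which notifications you actually open, but the privacy-preserving structure (all scoring local) is the point.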

That's useful AI that doesn't take billions of dollars to train, though. (it's also a great idea and I'd be down for it)

You mean paying money to people to actually program, in fair exchange for their labor and expertise, instead of stealing it from the internet? What are you, a socialist?

/s

As an Android user (Pixel), I've only ever opened AI by accident. My work PC is a Mac, and it force-re-enables Apple Intelligence after every update; I dutifully go into Settings and disable that shit. While summarizing things is something AI can be good at, I generally want to actually read the detail of work communications since, as a software engineer, detail is a teeeny bit important.

Only corporations ever pushed it; customers don't want it or need it.

I sidegraded from an iPhone 12 to an Xperia recently, as a toy to tinker around with, and I disabled Gemini on my phone not long after it let me join the beta.

Everything seemed half-baked. Not only were the answers meh, it felt like an invasion of privacy after reading the user agreement. Gemini can't even play a song on your phone or get you directions home. What an absolute joke.

Ironically, on my Xperia 1 VI (which I specifically chose as my daily driver because of all the compromises on flagship phones from other brands) I had the only experience where I actually felt like a smartphone feature based on machine learning helped my experience, even though the Sony phones had practically no marketing with the AI buzzwords at all.

Sony actually trained a machine learning model to automatically identify face and eye locations for human and animal subjects in the built-in camera app, in order to keep the face of your subject in focus at all times regardless of how they move around. Allegedly it's a very clever solution that identifies skeletal position and, in turn, head and eye positions; it works particularly well when your subject moves around quickly, which is where this is especially helpful.

And it works so incredibly well, wayyyyy better than any face tracking I had on any other smartphone or professional camera. It made it so, so much easier for me to take photos and videos of my super active kitten and pet mice lol.

That's pretty neat; I think that's a great example of machine learning being useful for everyday activities. Face detection on cameras has been a big issue ever since the birth of digital photography. I'm using a Japanese 5 III that I picked up for $130 and it's been great. I've heard you can sideload camera apps from other Xperias onto the 5 III, so I'll give it a try.

I think Sony makes great hardware, and their phones have some classy designs; I'm also a fan of their DSLRs. I've always admired their phones going back to the Ericsson Walkman models; their designs have aged amazingly. I appreciate how close to stock Sony's Xperia phones are; I don't like UIs and bloatware you can't remove. My last Android phone, a Galaxy S III, was terrible in that regard and put me off buying another Android until recently. I was actually thinking about getting a 1 VI as my next phone and installing Lineage on it, now that I'm ready to commit.

I totally agree about the AOSP-like ROM and I love it so much too, especially since Sony also makes it super straightforward to root (took me less than 10 minutes) with no artificial function limitations after root (unlike the Samsung models, where you can't even root at all). A highly AOSP-like ROM also means a lot of the OS customization tools originally developed for Pixel phones, where most of such community development effort is focused, tend to mostly work on the Xperia phones too :p

For sideloading Sony native apps from other models: out of curiosity I tried the old pro video recording app from the previous gen (Cinema Pro) on my Xperia 1 VI (the new unified camera app, with all the pro camera and pro video features included in a single app, is definitely a usability improvement lol), and it worked fine. So it might work if you sideload the new camera app onto your older model too; feel free to DM me if you want to experiment with this, and I can try the various methods for exporting that app and send it to you.

Lineage OS is not yet available for the gen VI model since it only came out in 2024, but the previous gen V model got its first Lineage OS release around September 2024, so it might not take that long to get Lineage OS for the gen VI model :D

I bought this 5 III with the expectation of flashing Lineage onto it. The holdup is that Japanese carriers like Docomo lock the bootloader. There used to be a way to unlock the bootloader through a paid service from a company called Canadian Wireless, which would send you a service code that unlocks these Japanese Xperia phones. What I didn't know was that Sony killed the servers that send the unlock code last June, so now I'm stuck on the stock ROM. No biggie though; like you've been saying, the stock ROM is close enough to an AOSP ROM that it's not that big of a deal. Having security updates would be preferable, though.

Hopefully by the time I upgrade to the 1 VI, Lineage will be available. Until then I'm happy with what I've got.

I'm totally interested in trying out the newer camera app. The camera app that comes with the 5 III isn't very good; face detection isn't very good and auto mode isn't great. I just haven't gotten around to looking for the APKs from the newer phones, so that would save me a lot of time.

Thanks, I'll send you a DM!

Doesn't help that I don't know what this "AI" is supposed to be doing on my phone.
Touch up a few photos? OK, go ahead; I'll turn it off when I want a pure photography experience (or use a DSLR).
Text prediction? Yeah, why not. I mean, is it little things like that?
So it feels like either these companies don't know how to use "AI" or they don't know how to market it... or, more likely, they know one way to market it and the marketing department is driving the development. I'm sure there are good uses, but it seems like they don't want to put in the work and just give us useless ones.

Useless for us, but not for them. They want us to use them like personalised confidante-bots so they can harvest our most intimate data

I recently got Apple Intelligence on my phone, and I had to Google around to see what it really does. I couldn't quite figure it out, to be honest. I think it's related to Siri somehow (which I have turned off, because why would that be on?), and apparently it can tie into an Apple Watch (which I don't have), so I eventually concluded that it doesn't do anything as of right now. Might be wrong though.

I only really use AI shit on my work computer (because hooray, I have a Copilot license), and it's only marginally better than doing searches myself. It's nice when it works, because it lets me save time researching things, but I CONSTANTLY have to ask "are you sure that's real?" because it just fucking makes up random command flags based on the prompt.

And it's only marginally better because fucking search engines have their heads so far up their asses they can see their tonsils. Godsdammit, I want working search engines back.

I don’t see how AI can benefit my phone experience.

I use my phone to make phone calls and for text messaging. Where does AI fit in? It doesn’t.

But imagine!!! What if AI could write your text messages for you and convincingly hold phone calls??? Then you wouldn't have to use your phone to interact with human beings at all!!!

~Why does anyone want this?~

It's all just to get more data from you so it can be monetized.

It actually made my Google speaker's assistant dumber, because I think they're trying to merge the two.

AI sucks and is a waste of humanity's resources. I hate how everything runs on buzzwords and industry trends. This shit needs to stop and just focus on simplicity and reliability. We need to stop trying to sell new things every cycle.


AI is a bad idea completely and if people cared at all about their privacy they should disable it.

It's all well and good to say that AI categorises pictures and allows you to search for people and places in them, but how do you think that's accomplished? The AI scans and remembers every picture on your phone. Same with email summaries: it's reading your emails too.

The ONLY assurance that this data isn't being sent back to HQ is the company's word that it isn't. And being closed source, we have no possible way of auditing their use of data to see if that's true.

Do you trust Apple and/or Google? Because you shouldn’t.

Especially now, when setting up a new AI-capable iPhone or iPad, Apple Intelligence is enabled by DEFAULT.

It should be OPT-IN, not opt-out.

All AI can ever really do is invade your privacy and make humans even more stupid than they already are. Now they don't even have to go to a search engine to look for things; they ask the AI and blindly believe whatever bullshit it regurgitates at them.

AI is dangerous on many levels because soon it will be deciding who gets hired for a new job, who gets seen first in the ER and who gets that heart transplant, and who dies.

With enough RAM, and ideally a good GPU, you can run smaller models (~8B parameters) locally on your own device.
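Rough back-of-the-envelope math for why ~8B parameters is about the ceiling on consumer hardware. The 20% overhead factor for KV cache and activations is my own ballpark assumption; real usage varies with context length and runtime:

```python
# Estimate the memory footprint of a local LLM: parameter count times
# bytes per parameter, padded by a rough overhead factor for the KV
# cache and activations (the 1.2 factor is an assumption, not a spec).

def model_memory_gb(params_billion: float, bits_per_param: int,
                    overhead: float = 1.2) -> float:
    """Approximate memory needed, in GB, for a quantized model."""
    bytes_per_param = bits_per_param / 8
    return round(params_billion * bytes_per_param * overhead, 1)

if __name__ == "__main__":
    for bits in (16, 8, 4):
        print(f"8B model at {bits}-bit: ~{model_memory_gb(8, bits)} GB")
```

By this estimate an 8B model wants roughly 19 GB at fp16 but under 5 GB with 4-bit quantization, which is why quantized 7B/8B models are what typically fits on a gaming GPU or a phone-class SoC.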

I've not found the small models to be good enough to be useful.

AI was never meant for the average person but the average person had to be convinced it was for funding.

It's really pointless to most people; it has its use cases, but it was just a hype train everyone got on a few years ago, like many did with blockchain, another nice technology that's only for certain use cases. I don't want or need an always-on AI to search through my phone and spy on me; I've already had overbearing exes try that. It's actually a big reason I'm considering switching to a Pixel 10 as my next phone, installing GrapheneOS, and calling it a day as my daily driver.

The consumer-side AI that a handful of multi-billion-dollar companies keep peddling to us is just a way for them to attempt to justify AI to us. Otherwise, it consumes MASSIVE amounts of our energy capacities and is primarily being used in ways that harm us.

And, of course, there's nothing they direct at us that isn't ultimately (and solely) for their benefit: our every use of their AI helps train their models, and eventually it will simply be groups of billionaires competing against one another to form the most powerful model that allows them to dominate us and their competitors.

As long as this technology remains controlled by those whose entire existence is organized around domination, it will be a net harm to all of us. We'd have to free it from their grip to make it meaningful in our daily lives.

Tbf most people have no clue how to use it nor even understand what "AI" even is.

I just taught my mom how to use Circle to Search, and it's a real game changer for her. She can quickly look up on-screen items (like plants she's reading about) from an image, and the on-screen translation is incredible.

Also, Circle to Search gets around link and text-copy blocking, giving you back the same freedoms you had on a PC.

Personally, I'd never go back to a phone without Circle to Search; it's so underrated and a giant shift in smartphone capabilities.

It's very likely that we'll have full live screen-reading assistants in the near future, which can perform Circle to Search-like functions and even live visual modifications. It's easy to dismiss this as a gimmick, but there's a lot of incredible potential here, especially for casual and older users.

Google Lens already did that, though; all you need is decent OCR and an image classification model (which is a precursor to the current "AI" hype, but actually useful).

That is still AI though...

Image classification model isn't really "AI" the way it's marketed right now. If Google used an image classification model to give you holiday recommendations or answer general questions, everyone would immediately recognize they use it wrong. But use a token prediction model for purposes totally unrelated to predicting the next token and people are like "ChatGPT is my friend who tells me what to put on pizza and there's nothing strange about that".

It is AI in every sense of the word tho. Maybe you're confusing it with LLM?

Neither LLMs nor image classification models are AI in any sense of the word; that's my point. LLMs happen to give the illusion of intelligence because of their language-based nature, but they're not fundamentally different from image classifiers.

I'd take Bixby back over this forced AI crap.

I mean, I wouldn't, but you know....

Bixby is the 8th or 9th best kitchen timer I've ever accidentally bought.

Everybody hates AI, and these companies keep trying to push it because they're so desperate for investors. Oh, I want to be a fly on the wall of a meeting room when the bubble finally pops.

Artificial Incompetence

Honestly I can't say I've ever had a reason to use it on my phone.

Maybe if it was able to do anything useful (like telling me where specific settings are on my phone when I can't remember their names but know what they do), people would consider it slightly helpful. But instead of making targeted models that know device-specific information, the companies insist on making generic models that do almost nothing well.

If the model was properly integrated into the assistant, AND the assistant properly integrated into the phone, AND the assistant had competent scripting abilities (looking at you, Google, filth that broke scripts relying on recursion), then it would probably be helpful for smart home management, by being able to correctly answer "are there lights on in rooms I'm not in?" and respond with something like "yes, there are 3 lights on; do you want me to turn them off?" But it seems the companies want their products to fail. Heck, it would help if the assistant could even do a simple on-device task like "take a one-minute video and send it to friend A" or "strobe the flashlight at 70 BPM" or "does epubfile_on_device mention the cheeto in office", or even just know how it is being run (Gemini, when run from the Google Assistant, doesn't).

Edit: I suppose it might be useful for wasting someone else's time.

I love the AI features for photos on my Galaxy, but other than that I don't use it.

People here like to shit on AI, but it has its use cases. It's nice that I can search for "horse" in Google Photos and get back all the pictures of horses, and it's also really great for creating small scripts. I, however, do not need an LLM chatbot on my phone, and I really don't want it everywhere, in every fucking app, with a subscription model.

People wouldn't shit on AI if it wasn't needlessly crammed down our throats.

People wouldn't shit on AI if it were actually replacing our jobs without taking our pay, and creating a system of resource management free from human greed and error.

The only thing is, Google Photos did that before AI was installed. Now I have to press two extra buttons to get to the old search method instead of using the new AI, because the AI gives me the most bizarre results when I use it.

Exactly. My results with Gemini search are worse every single time

You type "horse" into google pictures and you get a bunch of AI generated pictures of what the model thinks horses look like.

Most of the identification of things like 'horses' falls in line with the identification of things like 'crosswalks' and 'motorcycles'; in other words, the majority of the words associated with particular images in Google Maps come from people like us filling out captchas, not from AI.

I have never used this Bixby AI.

Personally, I am just not going to use the smallest screen I own to do most of the tasks they are pushing AI for. They can keep making them bigger and it’s still just going to be a phone first. If this is what they want then why can’t I just have the Watch and an iPad?

I don't use the A.I. features on iOS or Android (I have both for developer reasons), but I do like the new Siri animation better than the old one. So, not a total waste of time and money. More of a 99.999% waste of time and money.

Maybe it’s useful for people who work in marketing or whatever. Like you write some copy and you ask it to rewrite it in different tones and send them all to your client to see what vibe they want. But I already include the exact right amount of condescension expected in an email from a developer.

I found AI tools awesome for removing objects in photos or transcribing a conversation. Other than that it's useless because it's not reliable.

AI is not there to be useful for you; it's there to be useful for them. It's a perfect tool for capturing every last little thought you have and steering exactly what they can sell you.

It's basically one big way to sell you shit. I promise it will follow the same path as most tech: it'll be useful for some stuff, and in this case it's being heavily forced upon us whether we like it or not. Then its usefulness will slowly be diminished as it's used more heavily to capitalize on your data, thoughts, writings, and code, and to learn how to suck every last dollar from you, whether you're at work or at home.

It's why DeepSeek spent so little and works better. They literally were just focusing on the tech.

All these billions are not just being spent on hardware or better optimized software. They are being spent on finding the best ways to profit from these AI systems. It's why they're being pushed into everything.

You won't have a choice about whether you want to use it or not. It'll soon be the only way to interact with most systems, even if it doesn't make sense.

Mark my words: soon Google will drop standard search from its home page and it'll be a fucking AI chatbot by default. We are not far off from that.

It's not meant to be useful for you.

DeepSeek cost so little because they were able to use the billions that OpenAI and others spent and fed that into their training. DeepSeek would not exist (or would be a lot more primitive) if it weren't for OpenAI.

That's not how these models work; it's not like OpenAI was sharing all their source code. If anything, OpenAI benefits from DeepSeek, because DeepSeek released their entire model openly.

OpenAI is an ironic name now, ever since Microsoft became a majority shareholder. They are anything but "open".

Yes, it seems like no one even read the damn user agreement. AI just adds another level to our surveillance state; it's only there to collect information about you and figure out the inner workings of its users' minds to sell ads. Gemini even listens to your conversations if you have the quick-access toggle enabled.

I planned to skip this generation, assuming this would be the year of useless AI cramming, even though my phone was getting old. But Samsung was so desperate to sell S25s that upgrading was essentially cheaper than staying with my current model. Bought it, and turned all that mess off.

I used it once. Told it to pretend to be a centaur from Mars and explain how centaur sex works. Pretty fucking funny, but yeah it was a one-off.

....so how does centaur sex work? Don't leave us hanging!

So first off, it spoke like a generic fantasy character, with neighing here and there; I didn't think centaurs neighed, given that they have a human mouth, but whatever. It said it's just like horse sex, but there's extra intimacy because of the human torsos. It also said something about the "power and wisdom of Mars".

Amazing, I can finally go to sleep now. I shall write this in my diary with great enthusiasm!

And how does breast feeding work? Is it from the human tit or the horse tit?

finally somebody asking the real questions.

I deleted the deepseek app, you're gonna have to ask.

Surprise surprise!

At work we deal with valuable information, and we've gotta be careful what we ask. We'll probably end up with a total ban on these things at work.

At home, we don't give a fuck what your AI does. I just wanna relax and do nothing for as long as I can. So offload your AI onto a local system that doesn't talk to your server, and then we'll talk.

In my office there's one prototype model under testing that nobody uses and that does nothing useful. Anything else is outright banned; we handle way too much sensitive information. Office and Outlook often glitch when they try to open Copilot and get immediately slapped silly and told to shut up; the blinking blank windows are annoying, though. IT had to send a special communication to all staff explaining that it was normal behavior.

AI is useless for most people because it doesn't solve any of their day-to-day problems. The most common use is making their emails sound less angry and frustrated.

AI is useful for tech people, makes reading documentation or learning anything new a million times better. And when the AI does get something wrong, you'll know eventually because what you learned from the AI won't work in real life, which is part of the normal learning process anyways.

It is great as a custom tutor, but other than that it really doesn't make anything of substance by itself.

The fact that I can't trust the AI message to be remotely factual makes that sort of use case pointless to me. If I grep and sift through docs, I'll have better comprehension of what I'm trying to figure out. With AI slop, I just end up having to hunt for what it messed up, without any context, wasting my time and patience.

I started self-hosting AI to learn more about it, and I have come to the conclusion that whether it's bad or not really depends on the AI.

For instance, Google's AI results are just literal dog shit; it's so factually bad it's incredible. Microsoft also sucks. And this is why everyone dislikes AI: the two most common ways people see AI (Google Search and Windows 11) are just complete horse shit. They should not have rolled them out; it was an absolutely disastrous decision on their part, all because they were feeling FOMO. For example, I asked Microsoft's AI whether the 'New' Outlook had a certain feature from 'classic' Outlook. It said it did. An hour later, I found it didn't actually have that feature. Fucking ridiculous that Microsoft's own AI doesn't know their own software. Embarrassing. Did they not give it their own documentation?

But 'dedicated' AI like ChatGPT and DeepSeek I can trust to be factual maybe 95% of the time. Current events are its worst subject.

I really recommend watching this introduction by Andrej Karpathy https://www.youtube.com/watch?v=7xTGNNLPyMI

One part that really stuck with me is that the data in the model is more like a fading memory but the stuff in the context window is more like the working memory. Since I learned that I tend to put as much information as possible into the context window before asking questions about it. This improved the results drastically and reduced hallucinations.
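In practice that just means pasting the relevant material into the prompt ahead of the question. A minimal sketch in Python of this "context stuffing" approach (the prompt wording and message shape are my own assumptions, modeled on the common chat-completion format, not anything from the thread):

```python
# Minimal sketch of "context stuffing": prepend the full reference text to
# the prompt so the model answers from its working memory (the context
# window) rather than its fuzzy trained-in memory. Names are hypothetical.

def build_messages(document: str, question: str) -> list[dict]:
    """Build a chat-completion message list with the document as context."""
    return [
        {
            "role": "system",
            "content": (
                "Answer only from the provided document. "
                "If the answer is not in it, say so."
            ),
        },
        {
            "role": "user",
            "content": f"Document:\n{document}\n\nQuestion: {question}",
        },
    ]

# Stand-in for a real manual or article you'd paste in.
doc = "The reset button is under the battery cover."
messages = build_messages(doc, "How do I reset the device?")
# `messages` can now be sent to any chat-completion-style API,
# e.g. a self-hosted llama.cpp or Ollama server.
```

The key point is that everything you want the model to reason over goes into the user message itself, so the answer is grounded in text the model can actually "see".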

I need AI summaries a lot less than I need a smart mail filter that actually removes all the spam emails and texts.

I hate it 🤷 I keep it turned off anywhere that I can.

The only thing I want AI (on my phone) to do is limit my notifications and make calendar events for me. I don't want to ask questions. I don't want to start conversations.

I want to open my phone and have 1 summary notification of things I received and things to do. I want the spammy ones to just be auto filtered because I never click on them.

I'd also love if I could choose when to manage all of these notifications with my AI assistant. The only back and forth I'd like is around scheduling if I need to make changes.

I use ChatGPT for things like debugging error codes, but I have to be explicit with as much detail as possible or it will give me all sorts of inapplicable crap.

Can it generate weird porn locally?

ive seen some weird ai shit floating around, so yea probably

You can do that all by yourself, no AI needed!


I want a voice assistant that can set timers for me, search the internet, and maybe play music from an app I select. I only ever use it when I'm cooking something and don't have my hands free to do those things.

Generative AI is peaking in its ability to produce cringe boomer memes from a prompt. Everything else... meh.

Yeah, but the amount of energy these autocomplete search bars use is absolutely insane and disgusting, and people are going without because of it. And as the study shows, most people don't even use it regularly. It's a cool novelty, but really it's just fancy Google.

The only Galaxy AI feature I find even a bit amusing is Portrait Studio, which can turn a photo of someone into an AI-generated comic or 3D picture. But only as long as it remains free; it's not something worth paying for.

Outside of some education and medical scenarios, I have yet to hear of any truly useful AI.

Even for those it isn't useful. Professors have been abusing it because they're too lazy to check someone's writing themselves, and the AI has mistakenly flagged papers as AI-written. Medical use would be just as problematic: it would be worrying if they used it to make a diagnosis without ruling out other diseases with similar symptoms or results.

Am I crazy? I've got this thing writing code and pulling up website listings. I ask it certain things before Google and just have it give me the source. I use it to sum up huge documents so I can quickly analyze them before I go through them. Feels like how Google felt when it first came out. Y'all using the same AI?

(Apple ai is not what I’m talking about)

You're not crazy. AI is a useful tool I use daily for quickly summarizing things and for writing code that would otherwise be tedious as hell. I also use it for tips on certain code issues when I'm learning.

I've asked Bing GPT to find me 4K laptops and it proceeded to list 5 laptops that weren't 4K. Asked for the heaviest Pokémon and it responded Wailord, which has never been correct. Had GPT (not Bing) attempt to write an AHK script to give me forward and backward media keys; it failed. I asked it to fix it, and it said what was broken, why it didn't work, and then "fixed" it by giving me the exact code that didn't work the first time.

It's consistently wrong for me, so I now just skip it. If I have to double-check everything it says anyway, I might as well just do the research myself.

heaviest

I just asked Gemini that, and it listed Celesteela and another with the same weight. I checked this link

https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_weight#903.0_lbs.to_2204.4_lbs.(409.6_kg_to_999.9_kg)

and it seems to check out. Weird it would screw that up for you...

Time moves on and they fix things. I presume they also give the accurate number of 'r's in strawberry too, and (hopefully) stopped recommending glue on pizza.

Bold and controversial choice for a pizza topping, but no one has complained yet

/s

I also use gen AI for coding assistance and have had an extremely positive experience, but I almost never use it on my smartphone

Yeah, students are currently one of the few major beneficiaries of LLMs lol.

Not sure students are necessarily benefiting? The point of education isn't to hand in completed assignments. Although my wife swears that the Duolingo AI is genuinely helping her with learning French so I guess maybe, depending on how it's being used

Yeah? Well I fucking love it on my iPhone. Its summaries have been amazing, almost prescient. No, Siri hasn't turned my phone into a Holodeck yet, but I'm okay with that.

It is, for everybody mostly.


Just look at smart speakers: basically early AI at home. People just used them to set timers and ask about the weather, even though they were capable of much more. Google and others were unable to monetize them for this reason and have mostly given up.
(Protip: if you have a Google speaker and kids, ask about the animal of the day. It was added during COVID for kids learning at home.)

But people also aren't used to AI yet. Most will still google for something, some already skip that step and have ChatGPT search and summarize. I would not be surprised if the internet of the future is just plain text files for the AI agents to scrape.

It's possible that people don't realize what is AI and what is an AI marketing speak out there nowadays.

A fully automated Her-like experience, or an Iron Man-style Jarvis? That would be rad. But we're not really close to that at all. It sort of exists with LLM chat, but the implementation on phones is nowhere near there.

I’m a software engineer and GitHub Copilot as an AI pair programmer has vastly improved my productivity. Also, I use ChatGPT extensively to help with miscellaneous stuff. Apart from these two, I don’t really find other AI implementations useful.

Repetitive task scaling, nothing more. No high-quality expectations either.

I’m a software engineer and GitHub Copilot as an AI pair programmer has vastly improved my productivity

lol

I think the article is missing the point on two levels.

First is the significance of this data, or rather lack of significance. The internet existed for 20-some years before the majority of people felt they had a use for it. AI is similarly in a finding-its-feet phase where we know it will change the world but haven't quite figured out the details. After a period of increased integration into our lives it will reach a tipping point where it gains wider usage, and we're already very close to that.

Also they are missing what I would consider the two main reasons people don't use it yet.

First, many people just don't know what to do with it (as was the case with the early internet). The knowledge/imagination/interface/tools aren't mature enough so it just seems like a lot of effort for minimal benefits. And if the people around you aren't using it, you probably don't feel the need.

Second reason is that the thought of it makes people uncomfortable or downright scared. Quite possibly with good reason. But even if it all works out well in the end, what we're looking at is something that will drive the pace of change beyond what human nature can easily deal with. That's already a problem in the modern world but we aint seen nothing yet. The future looks impossible to anticipate, and that's scary. Not engaging with AI is arguably just hiding your head in the sand, but maybe that beats contemplating an existential terror that you're powerless to stop.

I like the idea of generating emojis with AI on phones. All the other use cases Apple has presented seem useless to me. I was really hoping it would be something, anything, but it was just underwhelming. And then Apple didn't even have it ready for the iPhone 16 at launch but said the phone was built for Apple Intelligence..? Seems kinda rushed and half-baked to me. I also like using Copilot in VS Code. It's proven to be pretty good at helping me debug.

It would have to have a 'use' to qualify as anything else. It takes longer to ask it to do anything than it does to just do it yourself. Plus they want you to call it up by their retard brand name; 'hey, Gemini' or 'okay, Google' is cringey AF.

I can't wait until you get dumb Siri for free, but it only tells the time, while the paid version costs $25 a month and also sets alarms.

Downvoted for casual use of a slur.

If you're talking about "retard", what would you prefer I use, and how long until that word becomes a slur? You know, it wasn't long ago that "retard" was the polite term, and "mongoloid" before that. It doesn't matter what word you use; if the meaning has negative connotations, some asshole like you decides to take their turn at policing speech, to the benefit of nobody.

In any case, I think you're wasting your time.

Deleted by moderator


Deleted by moderator


It became a slur back when I was a child in the '90s because people used it as a general pejorative. It doesn't help that it once innocently described a vulnerable minority. When cunts like you decided to use it as a slur, they tied that vulnerable minority to the concept of "this thing is bad" and harmed that community.

I'm not policing your speech. I'm calling you a cunt for using a decidedly shitty term that's been shitty for decades.

Great, well I don't buy any of your arguments, as genuine as they might seem; I think you just like feeling superior. So enjoy knowing I don't care.

Never thought you would. The comment wasn't really for you.

Hurry up and get Siri integrated with ChatGPT and it'll be a lot more useful.