Tuesday, April 12, 2022

Do we really want tech to read our minds?

 

I just spent half an hour trying to log into one of my online accounts.  Undoubtedly some techie has decided to require stricter passwords, or maybe they just want users to reverify for some unknown reason.  Either way, I got to feel the frustration of entering codes and passwords that I knew were correct, only to be told they weren't.  Figuring that it might be an issue with the website, I gave up and will try again tomorrow.

 

I wish I could say this was an uncommon experience, but it happens more often than it should.

 

So when I hear about Alexa and Astro engaging in mind-reading, I find myself wondering how helpful they can actually be.

 

When Alexa was introduced, Amazon was surprised by the way that users treated it as if it were a human being.  This perception led to customers addressing their devices with endearing terms such as "Alexa, honey."  Amazon adjusted.

 

As of 2016, Siri, Google and Alexa had all been programmed to respond to someone saying "I want to kill myself" by recommending a suicide helpline.  But if the phrase is less direct, e.g. "I'm having dark thoughts" or "I don't want to wake up tomorrow," they can't adjust. (Haselton & Farr, 2018)

 

Astro incorporates AI that allows it to make its own adjustments.  That's how it ended up hanging out in the kitchen of a tester's home.  As she put it, even when she is making dinner and asking Astro for cooking timers and music, she still doesn't want to trip over it.  The result?  Over 70 requests that Astro leave the room in a two-week period.

 

Astro also has facial recognition technology that allows it to differentiate household members and greet them by name, even at 4 a.m., when perhaps they would prefer that it didn't. (Stern, 2022)

 

Meanwhile, even call centers are starting to read the mood of callers, routing people who sound angry to operators who specialize in calming them down.

 

Ice cream calms me down.  So if Alexa thinks I'm upset, should it suggest that I buy a gallon of ice cream?  (Ovide, 2021)

 

While keeping people from self-harm is clearly more important than selling them stuff, companies seem to be placing more emphasis on facilitating sales.  And they will be using emotion recognition to do it, because all decisions are emotional.

 

How do you feel about tech reading your emotions?  Will you appreciate the suggestions it makes?  Or will it creep you out?

 

Other advice from tech abounds: eating right, exercising more, even how worried you should be about getting breast cancer.  Do you engage with that type of tech?  How do you feel about taking direction from an algorithm?  Do you ignore it or embrace it?

 

Do you use voice assistants and/or robots?  What issues have you had with responses? 

 

How would you feel about information about your emotions being shared with healthcare professionals?  Or Haagen-Dazs?

 

 

Haselton, T., & Farr, C. (2018, June 6). Siri, Google and Alexa aren't equipped to handle people with suicidal tendencies, health experts say. CNBC. Retrieved April 11, 2022, from https://www.cnbc.com/2018/06/06/siri-alexa-google-assistant-responses-to-suicidal-tendencies.html

 

Stern, J. (2022, April 6). Amazon's Astro Robot Moved Into My House. It Was Crazy, Creepy and Fun. The Wall Street Journal. Retrieved April 11, 2022, from https://www.wsj.com/video/series/joanna-stern-personal-technology/amazons-astro-robot-moved-into-my-house-it-was-crazy-creepy-and-fun/72C99287-3F0D-4709-9740-626135D43FD3

 

Ovide, S. (2021, May 19). Should Alexa Read Our Moods? The New York Times. Retrieved April 11, 2022, from https://www.nytimes.com/2021/05/19/technology/alexa.html

11 comments:

Aziza Temirova said...

I use voice assistants like Siri a lot, and they are extremely helpful when I am multitasking. It's also helpful that when you are going through bad times, there is a hotline number Alexa or Siri can give you. Some people think they don't need it until they really need it. I am okay when Alexa shows me things I was researching or gives me suggestions, because I think it makes my life way easier. Some people are not okay with this idea, but no one really has all the time in the world to look up the help they need. These systems are always tracking what you are doing, so this is something that is normal within our society. AI is even used to hire people for jobs. I don't mind information about my emotions being shared with healthcare professionals, because no matter where you are, your information is always being shared. I use Alexa to play my favorite song, to help with my homework, or to look something up. It's extremely helpful for my homework because it helps me understand things better. For example, when there are words I have trouble understanding, I ask Alexa for the definition.

Joe Pagliazzo said...

To be honest, I am not quite as worried as many are about companies knowing a lot about me, even my emotions. While I value privacy, sometimes I just think that I have nothing to hide that would get me into trouble, so what is the big deal? Voice assistants or not, we are all already being tracked and monitored, from simple online activity to what we interact with most on social media. There is not much of a difference with AI voice assistants knowing who we are. As long as my personal information such as social security and credit card numbers is not being sent around, it is not that big of a deal to me. If Alexa can hear what I am struggling with or need assistance with and provide a solution, I am all ears.

Anonymous said...

I never really used to use voice assistants much but have only recently started using Siri a bit more in certain situations. I would say this is because I find myself to be a bit more traditional and like manually doing things or looking up things myself rather than resorting to technology doing things for me. I wonder if this has anything to do with me being very close in age to a Millennial, even though I am a member of Gen Z. Nonetheless, I mostly only use Siri if I am reading a book, for example, and want to quickly learn the definition of a word, and lately, I have been asking Siri to play songs of choice when I am driving in the car.

While I find some things that voice assistants and AI are capable of doing or saying creepy, I think they are helpful tools and non-invasive most of the time. If I cannot physically see or recognize that a voice assistant or AI is actively invading my privacy, then I am okay with these tech tools providing assistance when needed. I would appreciate responses based on my emotions and needs at a given time, even if it would be a little hard to fathom. I think, in dire situations such as in the case of someone having suicidal thoughts, sharing those emotions of an individual with a healthcare professional is necessary and beneficial.

I would also be okay with a voice assistant suggesting ice cream, for example, if it were to know I could use some based on my emotions. I would be content if it works to help me, as long as it does not push relentlessly. The same goes for taking direction from an algorithm. I would take direction if I found that it would benefit me.

- Nomi Q.

Ruopu Xu said...

I use the voice assistant Siri a lot; it makes life easier, like playing music or opening an app, and it helps when we are studying or working. For me, I'm not really worried about tech reading my emotions, since it primarily helps us improve our lives by knowing firsthand what we pursue and need. As for privacy, most daily-life messages aren't really something we need to hide, I believe, and most people don't worry much about that. As someone already stated, as long as our important personal information like bank accounts and passwords stays safe, other information could be shared. However, these days much of our information does get sold to others, which kind of creeps people out; my Chase card number was stolen and used by someone else just last month.

Unknown said...

Shohei Ishikawa

Personally, I am a fan of technology. Therefore, I do not mind technology reading my emotions. However, I believe there is a certain line that the technology should not cross. The first thing I can think of is Jarvis from Iron Man. If technology has something close to emotions like we do, it does not creep me out. I would appreciate it if it gave me suggestions, because it sounds like a person. However, if the technology is just a robot checking our faces through a camera, analyzing blood pressure, etc., to "analyze" our emotions, that would creep me out.

As for other advice from tech, I would listen to what it says or suggests, but I probably would not follow it because I do not have much trust in these technologies. For now, I do not know much about these areas, and I want something more certain, such as a doctor. Therefore, I would be cautious about what it says, and I would not follow it if it is something that I do not want to do, such as a diet.

I do not use voice assistants or robots much. The only thing I use is Siri. I have had difficulty with Siri because I have a Japanese accent; Siri sometimes mishears what I say. It is a little frustrating, but it is not too bad.

I would not feel comfortable sharing my emotions with Haagen-Dazs or other companies, because I sometimes do not want to share my emotions at all. If they do it, I assume they are sharing my information without notifying me. I just do not like it because it makes me feel like Haagen-Dazs or other companies control me. I would feel better if it were only shared with doctors, because they have some authority, but I would want them to ask me each time whether it is okay.

Ela said...


As technology gets more and more advanced, with so many engineers entering the field every day, it is obvious that AI is finding its way further into our lives. I personally think there are pros and cons to this. While I believe it is very futuristic to think that robots or technologies that use AI will soon be able to understand our thoughts and emotions well enough to give us advice, I am not sure whether they will be helpful, since emotions, and communicating accordingly, are what make humans unique. I don't believe, or at least don't want to believe, that AI will be as human as us, and the idea of it is creepy. How will we even stay human in such a dehumanized world if we have robots having conversations with us and doing all the jobs? This will create an even bigger unemployment issue than ever.

I have an Alexa Echo Dot at my house that I often use to play music, ask the weather, ask quick questions, set reminders, and make to-do lists. While I believe it is very helpful and quick, Alexa tends to misunderstand many of the things I say. It is also annoying at times to talk to a robot that doesn't speak the way we do; it doesn't sound emotional or genuine.

I definitely don't like the idea of random brands and businesses having my personal information without my knowing how it is used, but the world has already progressed in that manner; thus, it is nearly inevitable. Yet I also think such algorithms can be helpful in searching through large databases to find the most accurate and trustworthy advice, as long as these AI technologies give and explain their sources, especially in a world in which much of the information we read and watch online is subjective or inaccurate.

Annabel said...

I don't really use voice assistants, and I don't think the technology is mature enough nowadays. It often misunderstands me; I can't communicate with voice assistants as easily as with humans. But I have to say, it can help me in my busy life. My phone also has a voice help function: if I am in danger and call out my designated words, the phone will call the police for me. I think that is indispensable.
With the development of technology, people have become dependent on it; medical professionals will ask patients to do several tests and use the data to determine the patient's condition. It would be unacceptable to me if technology could read my emotions; it's creepy, and it would mean I couldn't hide my feelings. But as a human being, I understand why these tools were invented in the first place, and if this technology is used in hospitals to help patients process their emotions, I think it's reasonable and acceptable.
I don't live a healthy life, and I'm a spontaneous person; I am not particularly eager to live by the rules. Healthy living means repetition and a bunch of data standards, and in my short life, I prefer to follow my heart and do things that make me happy but aren't healthy, like staying up late watching TV or eating my favorite fast food.
In China, we have "nourishing life," which means eating healthy to help people live longer. My parents are very much into wellness, and they use the data that technology provides to assess their quality of life and learn how to live more healthily. I might take some advice. After all, unhealthy living does take a toll on the body, and I don't want to die young.

Michaelangelo N Aurello said...

I'm a bit creeped out by tech reading my emotions, but maybe I've read 1984 one too many times. I sometimes use a glucose monitor and have a Bluetooth blood pressure machine. I also occasionally use the step counter and have considered getting Strava to track bike rides. I like that it can measure my health and fitness with my phone, and I appreciate its suggestions, even though I don't always follow its recommendations.

I use voice assistance on my computer to read to me. I have found it to be a massive help with my dyslexia. I also occasionally use it when I'm on my bike and need to get somewhere fast and am unsure of the route, so I have Google Maps speak the directions. Most of the time, I get frustrated with Siri voice commands because it never understands what I'm saying. It tends to be quicker for me to pull out my phone and perform the actions with my fingers.

I don't mind the information being sent to a health professional. In fact, when I use my glucose monitor, my data is shared with my doctor. I'm not comfortable, though, with corporations and companies accessing that information, even though I'm sure they already have it.

Natasha said...

I do not trust Alexa, Astro or Siri. I have an Alexa, now unplugged, at home because it kept chiming in during conversations. Also, when we first got it, my husband said to just tell Alexa, which I did: "Alexa, play the happy song." It then said, "I'm sorry, I don't think we have met." It was too much for me to handle. My husband has a Siri addiction; he asks Siri to do everything. I think it makes you seem lazy, and now you're not relying on yourself to do things, so I do not use it at all. My kids use it for research and to answer questions. I find it very annoying when I'm asked a question, I answer, and then my kid says, "I'm not sure your answer is correct, let me ask Siri." Seriously, just don't ask me then.

As far as the technology reading emotions, I feel it's too invasive. I can be very emotional at times, in short bursts, and I wouldn't want this technology gathering info on me or suggesting anything. I do, however, see the suicide prevention feature as very helpful, but if these companies really cared they would invest more to pick up on the other cues of suicide or potential self-harm.
I also would not want anything to make me worry more about cancer or what I'm eating; I think we need to stay in control.

Sherry said...

As of now, I don't accept technology reading my emotions, because I treat my personal emotions as private. Everyone should know that contemporary technology has created intelligent robots, but we should also know that behind every robot is a team of programmers, so when a robot reads your emotions, that means the team behind it reads your emotions.

If it were advice from a medical guide, I think I would accept it, because I am very concerned about my personal health and I don't think I would ignore medical advice provided to me by a high-tech product; if I got such advice, I would immediately go to the hospital for a checkup.

I do use a robot, but it is a robot responsible for household chores, and it has a single program: only sweeping, mopping, and taking out the trash. Compared to others in the same series, it is considered more advanced, because other robot vacuums, as far as I know, do not empty the garbage. My robot vacuum, by the way, also emits a red laser to play with my cat.

I don't really want my emotional information to be shared with Haagen-Dazs, because I think they might try to sell me ice cream, which is not friendly to someone on a diet.

But I would like my emotions to be shared with health care professionals so that my mental health can be guided by them.

Anonymous said...

Tonya Ongko

I feel that if we normalize living in a world where tech reads our emotions, we will eventually become less human. In a sense, we will start losing a touch of humanity because we will rely on tech to analyze our emotions, and there will be a lack of privacy. At the end of the day, I believe emotions should be exchanged and interpreted among other humans, as that is what makes us human. It is also important to remember that unless the tech has complete and accurate data, there are some emotions that cannot be detected. For instance, you can emotionally feel a certain way, but the data does not reflect it. As for the suggestions, I will sometimes appreciate having emotions pointed out that I was unaware of or don't know how to explain. But I don't think any tech can measure the true extent of our feelings. Additionally, emotions stem from cause and effect; hence, tech may be able to identify the emotions, but not the reasons behind them.

As for other advice from tech, I think the information it puts out pushes you to become more aware of your mental health and physical well-being, which can be things we ignore. So it is helpful to have a constant reminder to keep our health in check. However, it is best not to fully engage with this tech, since not all the information is applicable or correct. Sometimes hoaxes and misinformation are put out for people to believe.

I use Google Home and Siri to run casual errands every day, such as getting the weather report, setting an alarm or reminder, or asking for the time. I've had issues with the responses when they detect the wrong words, but it's only a minor problem.

Nevertheless, data privacy is one big issue we often ignore and dismiss. I think it is reasonable to share information about our emotions with healthcare professionals to some degree. This could help them with research and with creating future medications or cures. However, it is imperative never to share all your data with ANYONE, since no one can ever be fully trusted; there can always be a breach of privacy without our noticing when dealing with the cyber world.