Check out my digital garden: The Missing Premise.
I absolutely agree with you. That is the internet platform business model after all.
Still, I think OpenAI and Google have a legitimate argument that LLMs without limitations may be socially harmful.
That doesn’t mean a $20 subscription is the one and only means of addressing that problem though.
In other words, I think we can take OpenAI and Google at face value without also saying their business model is the best way to solve the problem.
Companies like OpenAI and Google believe that this technology is so powerful, they can release it to the public only in the form of an online chatbot after spending months applying digital guardrails that prevent it from spewing disinformation, hate speech and other toxic material.
Google Bard is currently free to use, so the danger isn’t locking the tech up behind a subscription (though Google will 100% do that eventually).
This is a good question.
Open Empathic’s answer is that because AI is becoming more embedded in our lives, an “understanding” of emotions on AI’s part will help people in a variety of ways, both within and outside of industries like healthcare and education, and, of course, in general commercial endeavors. As far as they’re concerned, AI is a tool that will help encourage “ethical” human decision-making.
On the other hand…we have a ton of different ethical theories, and industries ignore them wholesale to make profits. To me, this looks like standard-grade techno-bro hubris: they intend to use “disruptive” technology to “revolutionize” whatever. The exploitative, profit-making social hierarchy isn’t being challenged. The Hollywood writers’ strikes have only just begun, for example. Once Open Empathic starts making breakthroughs in artificial emotional intelligence, the strikes will return and be even more prolonged, if not broken altogether.
I’d answer your question this way: people who care about other people should be deeply concerned.
Even without a focus on empathy, ChatGPT’s responses in a healthcare setting were rated as more empathic than physicians’ responses. At best, empathic AI would be used to teach people how to be more empathetic toward other humans, with people needing it less and less over time. Far more likely, human communication becomes mediated through empathic AI (with some company making a lot of money off the platform of mediation) while the quality of face-to-face human interaction deteriorates.
The structures that make this technology evil here are very well understood, and they matter much more than the fairly banal language we’re using to describe the tech.
Conversely, the fairly banal language used to describe the tech is how the structures that make technology evil are concealed.
Calling humans human rather than objects, even if object detection is what AI does, re-instills certain objects with a whole host of features that distinguish them from other objects. It won’t matter to the AI, obviously. But it will matter to the people involved in creating and using it.
I mean, imagine a Tesla displaying “Object Identified” as it barrels over a misplaced jaywalker. My previous sentence buries the horror of someone being murdered. Similarly, humans are understood to have rights, thoughts, feelings, whole worlds that exist inside their heads, and they exist within a social ecosystem where their presence is fundamental to its health. “Object” captures none of that. But identifying human objects as human does.
Relabeling human objects as human reintroduces all the associated values of being human into AI object-detection discussions. And so it becomes easier to see how the evils of the technology are acting on us, rather than having them concealed.
Anecdotally, this was my experience as a student when I tried to use AI to summarize and outline textbook content. The result was almost always incomplete, such that I’d have to have already read the chapter to fill in what the model missed.