Why Do AI Owners and AI Users Have Conflicting Interests?

If you make and commercialize high-quality #AI, you are also likely to have conflicting interests with users. Here’s why. #MachineLearning #AIeconomics #incentives #economics
— ivanjureta (@ivanjureta) February 22, 2018
Just as l’art pour l’art, art for the sake of art, was the bohemian creed of the 19th century, there now seems to be an “AI for the sake of AI” creed in building general-purpose AI systems based on Large Language Models. Let’s say that the aim of a sustainable business is happy, paying,…
The short answer: careers that reward creative problem solving in domains with scarce knowledge. Let’s unpack that.
I wrote in another note (here) that AI cannot decide autonomously because it does not have self-made preferences. I argued that its preferences are always a reflection of those its designers wanted it to exhibit, or of patterns in its training data. The irony with this argument is that if an AI is making…
We should reduce the cost of authorship and create an incentive mechanism that generates and assigns credibility to authors in a community.
Being entitled to make decisions carries with it the responsibility for the outcomes of actions that those decisions led to. Accountability can be implemented through decision governance by defining responsibilities for the outcomes of decisions. The idea that decision responsibilities are the counterpart to decision rights is easy to understand. However, defining useful decision responsibilities involves finding…