Quality Assurance for AI: An Inevitable Tradeoff

How do you ensure #quality of a #service which uses #AI for #personalization? #QA in that case is all about risk management. pic.twitter.com/IdKcY5noBL
— ivanjureta (@ivanjureta) January 31, 2018
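The tweet compresses the argument: QA for a personalized, AI-driven service is less about proving outputs correct and more about bounding the damage when they are not. As a rough illustration only, here is a minimal Python sketch of that stance; every name and threshold in it is invented for the example, not taken from any particular system. The idea is to serve the model's output when its self-reported quality and uncertainty sit inside an accepted band, and otherwise degrade to a vetted, non-personalized default.

```python
# A minimal sketch of QA-as-risk-management for an AI personalization service.
# Every name here (StubModel, MIN_QUALITY, FALLBACK_ITEMS) is hypothetical,
# chosen to show the shape of the check rather than any specific API.

FALLBACK_ITEMS = ["editors-pick-1", "editors-pick-2", "editors-pick-3"]
MIN_QUALITY = 0.6        # assumed acceptance threshold, set by whoever owns the risk
MAX_UNCERTAINTY = 0.3    # assumed cap on the model's self-reported uncertainty

class StubModel:
    """Stand-in for a personalization model; returns items plus self-estimates."""
    def recommend(self, user_profile):
        return ["item-a", "item-b"], 0.7, 0.2

def serve_recommendations(model, user_profile):
    """Serve personalized items only while the estimated risk stays acceptable."""
    items, quality, uncertainty = model.recommend(user_profile)
    # Risk management rather than proof of correctness: when the estimates
    # fall outside the accepted band, fall back to a vetted default.
    if quality < MIN_QUALITY or uncertainty > MAX_UNCERTAINTY:
        return FALLBACK_ITEMS
    return items

print(serve_recommendations(StubModel(), {"user_id": 42}))
```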
Artificial Intelligence, when defined carelessly, is more confusing than it needs to be. Sometimes it is considered a technology, which is itself problematic: is it a technology on a par with, say, database management systems, which are neutral with respect to the data they are implemented to manage in any specific instance? Or is it…
Figures 1 and 2 show cost versus time; Figure 1 shows long iterations, Figure 2 short iterations. We choose to do something at time zero, at the origin of each graph, and when we do, we do it under assumptions made at that time. Dashed red lines convey…
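The curves in Figures 1 and 2 are not reproduced here. As a hedged illustration only, the following Python sketch assumes that the cost of acting on unrevised assumptions grows with time since the start of the current iteration, and that each iteration boundary is a chance to revisit them; under that assumption, shorter iterations cap the accumulated cost sooner, which is, on this reading, the contrast between the two figures.

```python
# A hedged sketch, not the original figures: assume the cost of acting on
# assumptions made at time zero grows with time since they were last revisited,
# and that each iteration boundary resets that clock.
import numpy as np
import matplotlib.pyplot as plt

def assumed_cost(t, iteration_length, rate=1.0):
    """Cost accumulated since the start of the current iteration (assumed linear)."""
    return rate * (t % iteration_length)

t = np.linspace(0, 12, 600)
plt.plot(t, assumed_cost(t, iteration_length=6.0), label="long iterations (Figure 1)")
plt.plot(t, assumed_cost(t, iteration_length=2.0), label="short iterations (Figure 2)")
plt.xlabel("time since the decision at time zero")
plt.ylabel("assumed cost of unrevised assumptions")
plt.legend()
plt.tight_layout()
plt.show()
```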
In a previous note, here, I wrote that one of the requirements for Generative AI products/services in China is that if they use data containing personal information, the consent of the holder of that personal information needs to be obtained. It seems self-evident that this needs to be a requirement. It is also not…
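In practical terms, the requirement translates into a gate on the training corpus: records carrying personal information are usable only if consent is documented. The sketch below is a minimal illustration of that gate, assuming each record carries flags for personal information and consent; the record structure and field names are hypothetical, not taken from the regulation.

```python
# A minimal sketch, assuming each training record carries a flag recording
# whether the data subject's consent was obtained. The record structure and
# field names are hypothetical, invented for the example.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    text: str
    contains_personal_information: bool
    consent_obtained: bool

def filter_for_consent(records):
    """Keep records that either hold no personal information or have
    documented consent from the holder of that information."""
    return [
        r for r in records
        if not r.contains_personal_information or r.consent_obtained
    ]

corpus = [
    TrainingRecord("public product manual", False, False),
    TrainingRecord("forum post naming a private individual", True, False),
    TrainingRecord("interview transcript released with consent", True, True),
]
usable = filter_for_consent(corpus)  # drops the record lacking consent
print(len(usable))  # 2
```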
There are, roughly speaking, three problems to solve for an Artificial Intelligence system to comply with AI regulations in China (see the note here) and likely future regulation in the USA (see the notes on the Algorithmic Accountability Act, starting here): Using available, large-scale crawled web/Internet data is a low-cost (it’s all relative) approach to…