Quality Assurance for AI: An Inevitable Tradeoff

How do you ensure #quality of a #service which uses #AI for #personalization? #QA in that case is all about risk management. pic.twitter.com/IdKcY5noBL
— ivanjureta (@ivanjureta) January 31, 2018
There are, roughly speaking, three problems to solve for an Artificial Intelligence system to comply with AI regulations in China (see the note here) and likely future regulation in the USA (see the notes on the Algorithmic Accountability Act, starting here): Using available, large-scale crawled web/Internet data is a low-cost (it’s all relative) approach to…
If competence shortens learning, then its value is proportional to the cost of the learning it replaces, that is, the cost of the iterations that would have been needed to achieve the same effects without access to that competence.
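Read as a back-of-the-envelope formula (the notation here is only illustrative), the claim is roughly:

V(c) ∝ Σ_{i=1}^{n} k_i ≈ n · k,

where c is the competence, n is the number of learning iterations it makes unnecessary, and k_i (or a roughly constant per-iteration cost k) is the cost of each iteration in time, labour, or compute. If, say, it spares ten iterations of about two days of work each, the competence is worth on the order of twenty working days of learning avoided.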
I use “depth of expertise” as a data quality dimension of AI training datasets. It describes how much of the expertise in a knowledge domain a dataset reflects. This is not a common data quality dimension in other contexts, and I haven’t seen it treated as such in discussions of, say, the quality of data used for…
In April 2023, the Cyberspace Administration of China released a draft Regulation for Generative Artificial Intelligence Services. The note below continues the previous one related to the same regulation, here. One of the requirements on Generative AI is that the authenticity, accuracy, objectivity, and diversity of the data can be guaranteed. My intent below is…
This text follows my notes on Sections 1 and 2 of the Algorithmic Accountability Act (2022 and 2023). When (if?) the Act becomes law, it will apply across all kinds of software products, or more generally, products and services which rely in any way on algorithms to support decision making. This makes it necessary…