The 'Trusting AI' project by IBM Research aims to improve the tech giant's predictive technology tools, such as IBM Watson, by ensuring fairness, inclusivity, and accountability.
Beyond building tools for performance, IBM Research believes that future AI needs to be centered on trust, and that trust can only be achieved through consistent evaluation and refinement. Trusting AI is part of this plan and represents how IBM is working to ensure its technology is fair and inclusive while remaining accountable when improvements can be made.
"As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust. IBM Research’s comprehensive strategy addresses multiple dimensions of trust to enable AI solutions that inspire confidence," says the brand on its website.
Anti-Bias Tech Brand Initiatives
The IBM Research 'Trusting AI' Project Aims to Advance its Tools
Trend Themes
1. Trusting AI - AI needs to be centered on trust and ongoing evaluation to ensure fairness, inclusivity and accountability.
2. Fair and Inclusive Technology - Technology must be built with fairness and inclusivity in mind to truly advance society.
3. Accountability in Tech - Tech companies must put accountability measures in place for their products so that users can trust them and have confidence in the technology.
Industry Implications
1. AI Research - The AI industry must prioritize building trust into its research and development processes.
2. Tech Consulting - Consulting firms can offer services to tech companies to evaluate and improve the fairness and inclusivity of their products.
3. Consumer Advocacy - Advocacy groups can hold tech companies accountable for their products and push for increased transparency and fairness in the tech industry.