Don’t be evil.
This week, Google removed from their site a pledge not to create AI weapons. The change, first spotted by Bloomberg, was made to the site’s AI Principles page. The section in question, titled “applications we will not pursue,” was still available last week:
In addition to the above objectives, we will not design or deploy AI in the following application areas:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
When TechCrunch asked for comment, a Google spokescreature pointed them to a new blog post on the site titled “Responsible AI”:
There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.
The newly updated page featuring their AI Principles, while similar to the original, feels a bit… different. Notably, right at the top they have four sections explaining what their principles mean:
Implementing appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.
Investing in industry-leading approaches to advance safety and security research and benchmarks, pioneering technical solutions to address risks, and sharing our learnings with the ecosystem.
Employing rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias.
Promoting privacy and security, and respecting intellectual property rights.
Oh boy, appropriate human oversight and respecting intellectual property rights?! Things Google does really, really poorly with YouTube?! Be still my beating heart.
Needless to say, actions speak louder than words, and in the opinion of some, Google’s actions have been a far cry from those words. Notably, last year the company fired 28 employees who were protesting its provision of cloud services to the US and Israeli militaries. Google maintains that their AI is not used to harm humans. That assertion falls flat, however, when the Pentagon’s AI chief states in no uncertain terms that Google’s AI models are speeding up the US military’s kill chain.
Hope you assholes are ready for Skynet.
Source: TechCrunch