An A.I. Code of Ethics: Lessons From Dinner With Stanford’s Top A.I. & Ethics Experts
CRISPR, film ratings, self-regulation, and A.I. with Stanford's Frontier Technology Lab
👋🏽 Hi friends!
The A.I. discussion has moved at lightning speed. A month ago, we were marveling at ChatGPT’s capabilities. Then the conversation shifted to the impact A.I. will have on education, ownership, copyright, journalism, and practically every knowledge and creative industry.
In the midst of this, a deeper question has emerged: what are the ethical and moral repercussions of A.I. as it becomes mainstream?
I was lucky enough to be invited to a dinner last week with Stanford and Silicon Valley’s top A.I. and ethics experts, led by Stanford’s Frontier Technology Lab (FTL) and Silicon Valley Bank. At dinner, we discussed the potential creation of an A.I. Code of Ethics. This newsletter will explore that topic.
In the next newsletter, I’ll talk more about the hype cycle currently surrounding A.I.
Let’s dig in.
CRISPR, Film Ratings, Self-Regulation, and A.I.
Rob Reich believes that the A.I. industry needs to set an ethical code for itself — quickly. Not just for the sake of society, but for the A.I. industry’s survival as well.
Reich has incredible authority on this topic — he’s the director of Stanford’s McCoy Family Center for Ethics in Society, the associate director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), and a co-author of System Error: Where Big Tech Went Wrong and How We Can Reboot.
During a three-and-a-half-hour dinner near Stanford’s campus, Reich and a curated group of Stanford professors, alongside some of Silicon Valley’s top executives and A.I. founders, debated the ethical quagmires that await A.I. and how best to tackle them.
(Thank you to Dr. Ernestine Fu of the Stanford Frontier Technology Lab for inviting me to such a fascinating dinner with such prodigious individuals.)
Led by Reich, we tried to answer two key questions:
Does the A.I. industry need a self-imposed code of ethics or norms?
What should an A.I. code of ethics include?
Reich pointed out early in the dinner that many industries adopt norms and self-regulation to prevent bad actors and bad outcomes — and to head off harsher government-imposed regulation. The Motion Picture Association of America, for example, set up the Code and Rating Administration to rate whether a film is appropriate for viewers of certain ages. It’s the system that determines whether a movie is rated G, PG, PG-13, R, or NC-17.
This system wasn’t forced on movie theaters by a government regulator; instead, Reich said, it was a form of self-regulation. Theater owners refused to show films that hadn’t been submitted for rating. This self-imposed system likely increased trust between consumers and the film industry and prevented more heavy-handed government regulation.
In a more modern example discussed at the dinner, the biotech industry has moral guardrails around CRISPR, the incredible gene editing technique that has the power to create new medications, cure genetic diseases, modify the genome of an organism cheaply, or — if someone wanted to — genetically engineer humans.
Some of these guardrails are legal, but most are self-imposed by the biotech industry — exile from the academic community, denial of resources and funding to rogue actors, and so on. This has, so far, prevented a wave of unchecked human genetic engineering.
Reich and others are proposing that the nascent-but-growing A.I. industry find its own norms and guardrails to prevent the worst outcomes.
What Is the Worst-Case Ethics Scenario for A.I.?
Before we can get into what a Code of Ethics for A.I. could look like, we have to ask: what is the worst-case scenario if A.I. is misused and harmful behavior is left unchecked?
If there are no regulations or guardrails for the use of A.I., what would happen?
I can point to at least one consequence that is notably personal for me:
As many of you know, I worked for CNET as a tech commentator and columnist after my four years at Mashable. I had the honor of working with some of tech’s most diligent and talented journalists.
Now CNET — under new ownership — is using A.I. in ways that gut its reputation and steal content from others. This misuse of A.I. is damaging people’s lives: the journalists being laid off from CNET, the writers whose work is being plagiarized by a major media publication, and the readers being fed misleading or false information by unchecked A.I.
This outcome is mild compared to what a bad actor (like a rogue state) could use A.I. for if left unchecked. Imagine a misinformation campaign on steroids.
The misuse of A.I. damages the reputation of all A.I. technologies and companies, even those that had nothing to do with this specific case of plagiarism and misinformation. This path will inevitably lead to public mistrust and government-imposed regulation.
That is, unless the A.I. industry — my industry — comes together and adopts a code of ethics that calls out and shuns the misuse of A.I.
Which leads to the next inevitable question…
What Would an A.I. Code of Ethics Look Like?
This isn’t even the hard part of the conversation, in my opinion. The hard part is implementing and enforcing a code of ethics for A.I. (I’ll get to that in a moment.)
There are a few key pillars that logically make sense for any A.I. Code of Ethics. These recommendations, many of which came up at the dinner, feel like common sense and would condemn the most obvious abuses of A.I.:
Opting Out of Data Training: Moving forward, content creators (artists and writers) should have the ability to opt out of having their content used to train someone else’s A.I. This is already happening with Stable Diffusion. (If content is already on the Internet, though, that cat may already be out of the bag.)
Plagiarism: A.I. should be developed in ways that do not result in plagiarism, and models should be updated when unintended cases of plagiarism occur.
Data Privacy: Users should be informed of how their conversations with A.I. will be used and should be able to keep those conversations private if they choose. I recognize that A.I. systems need conversational data to be trained properly; this is a problem that will take more work to solve.
Removing Dangerous A.I. Content: We shouldn’t be able to use A.I. to create content that harms society (toxic imagery, hateful content, instructions for dangerous acts, etc.). So far, the A.I. industry has focused tremendous energy on this problem, to the benefit of the general public.
Removing Human Bias: A.I. is only as good as the data used to train it. The last time I asked A.I. image generators to create images of an investor, the results were all white men. This is a problem I spoke about as far back as 2017 at the DLD conference. We need to commit to training more of this bias out of A.I.
There are several more, but this is a good starting point for an A.I. code of ethics or at least norms for the A.I. industry.
Action is already underway by members of the A.I. community to build more ethical A.I. systems. Mozilla has been an advocate of building trustworthy A.I., and Google has entire teams dedicated to responsible A.I. practices. On the government side, the E.U. has been working on the A.I. Act, which encompasses some of the above points.
There’s still a lot more work to be done, though. As I said before, I don’t think the hard part is figuring out a code of ethics — it’s enforcing it when bad actors purposely misuse A.I.
How Would an A.I. Code of Ethics Be Enforced?
While legislation is starting to be discussed and lawsuits are starting to be filed, both are guaranteed to move far slower than A.I. technology itself. By the time they take effect, it will likely be too late to prevent unintended harm.
(We went from the status quo to universities ringing the alarm bells on ChatGPT in just over a month. Imagine what will happen in another 3-6 months.)
Self-regulation is a quicker path than government regulation, but it will require buy-in from the industry’s biggest names — OpenAI, Google, Microsoft, Meta, Amazon, Midjourney, Stability AI, and RunwayML, to name just a few.
It will require agreement on a code or industry norms — not as official as the MPAA rating system, but it should be written out.
It will need input from key A.I. researchers and academic leaders like Reich.
And most importantly, it will require enforcement mechanisms. Does a company that violates this hypothetical code of ethics get booted from major A.I. platforms? Do the main cloud computing providers (Amazon, Google, Microsoft) refuse to host its code?
Enforcement is not as clear-cut for A.I. as it is for biotech, which is much more university-oriented, making it harder to get your work published if you violate the field’s norms. But the mechanisms do exist, and it wouldn’t take more than a few key A.I. players coming to an agreement for enforcement to have real bite.
A few things are for sure, though: self-regulation will come much faster than government-imposed regulation. Speed matters in an industry that is evolving faster than any of us could have ever anticipated. And there are real-world consequences — for both the A.I. industry and society — if we don’t ramp up this discussion soon.
If anyone wants to chat about an A.I. Code of Ethics, I’m here. You know how to find me.
Gives and Asks:
If you are interested in being part of any push towards an A.I. Code of Ethics, shoot me an email.
I am creating a list of the top business-focused A.I. conferences of 2023. If you know of one or have a list, please send it to me. I will be sharing the whole thing here on the newsletter.
I am putting together a series of A.I. dinners and salons. If you want to be on the list for a potential invite, please email me. The guest lists will be based on the proposed topic of each dinner. And if you want to be a sponsor, let me know ASAP. (I guarantee seats for any sponsors!)
If you liked this newsletter, please send it to a friend:
So I’m Going Viral on TikTok Again…
Around midnight last night, in my hotel room here in Boston, I came across a story by Ashlee Vance about Braintree founder Bryan Johnson, who is using the fortune he made from Braintree’s $800 million sale to PayPal to try to reverse his aging so that his body functions like an 18-year-old’s again. (link)
He hired 30 doctors, eats exactly 1,997 calories per day, takes dozens of supplements, and has taken over 33,000 images of his bowel. He uses 7-8 creams per day and has a device to monitor his nighttime erections. He’s even injected “young-person fat” into his face.
So naturally, I made a TikTok about it. And, perhaps to the surprise of nobody who knows the Internet, it went viral.
And then I posted it to Instagram, where it did well too.
So feel free to watch my TikTok about Bryan Johnson’s insane $2 million/year health regimen.
More power to you, Bryan. I hope you publish the results of your experiments to help the rest of us.
Here are some of my other A.I. and tech TikToks from this week:
One Last Thing — We Made 50+ Members of the Octane AI Community Co-Owners, and We’re Adding More
Companies are grown by their communities. But those communities usually don’t have any ownership in the companies they helped build.
My co-founder & CEO Matt Schlicht and I thought that was wrong, so last week, we announced the Octane AI Collective.
We gave 50+ members of our community equity. In return, they give us regular feedback and get access to our newest A.I. products for ecommerce. (Matt was inspired by DAOs in Web3 and how they foster and reward community.)
If you’re an Octane AI customer, want to use A.I. to grow your business, and/or are in ecommerce, go apply for the Collective.
We’re going to be adding new members soon.
Cheers
~ Ben