
Interview: OpenAI GC Che Chang on Managing Risks and Rewards of AI

Plus guidance for enterprises from the industry trailblazer

We're pleased to launch 'Interviews with Innovators,' a series where we explore AI topics in depth with experts from diverse backgrounds. Today's guest carries a lot of responsibility on his shoulders: please meet Che Chang, General Counsel at OpenAI.*

We’ve discussed how the OpenAI legal team uses ChatGPT, how they navigate legal uncertainty around new technology, and how they operate across different jurisdictions. Che also offered guidance for enterprises on navigating risks and liability when working with GenAI and provided insights into the future. Let’s dive in!

How OpenAI Navigates the AI Regulatory Landscape – Chat with GC Che Chang

Che Chang and Ksenia Se at Ai4 Conference, 2023

Ksenia: Let's do a little time traveling. On December 1st, 2022, we all woke up to a new world propelled by generative AI, and it was all instigated by OpenAI's launch of ChatGPT. You, as a legal advisor, always have to be a few steps ahead. Did you anticipate what has happened? How has your professional life changed since then?

Che Chang: It's changed a bit. ChatGPT has been something of a life-changing experience for a lot of people. At the time, we were optimistic that it would do well, but we were not certain. We put it out there and were really surprised and amazed by the reception.

ChatGPT is based on technology that has been around for years. Even prior versions without the conversational interface had long been available on the OpenAI website. So the reception did surprise us a little bit.

Something about having the technology available in a way that was easily accessible and understandable to people, I think, really resonated across the world.

Ksenia: A few of our readers asked: do you use ChatGPT on a day-to-day basis?

Che Chang: We use it a lot internally. I remember when we were first testing GPT-4, it wasn’t always factually accurate, and I remember thinking, no one will ever use this for legal purposes because that's crazy, right? Your lawyer can't be wrong two out of ten times.

But there are companies and partners of ours who have tapped into it and figured out a way to make it useful, with context, access to the right databases, etc.

We use it internally for a lot of different reasons. One very popular internal use case at OpenAI has been meeting summarization: meetings get recorded, transcripts get created, and notes get summarized, telling you who was there, the action items, and the follow-ups.

Personally, my legal team uses it on things we want to publish or talk about publicly. We prompt it with "make this easier to understand," "write this in more plain English," "don't use legalese." It's very useful that way and has been very popular.
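For readers who want to try the workflow Che describes, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and sample transcript are our own illustrative assumptions; OpenAI's internal tooling is not public.

```python
# Minimal sketch: summarizing a meeting transcript, in the spirit of the
# internal use case described above. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A raw transcript, e.g. produced by a speech-to-text service (made up here).
transcript = """[00:01] Alice: Let's review the Q3 launch timeline.
[00:14] Bob: Legal still needs to sign off on the data addendum.
[00:32] Alice: Bob, can you own that and report back Friday?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize this meeting in plain English: list who was "
                "there, the decisions made, action items, and follow-ups."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

The same pattern covers the legal team's plain-English use case: swap the system prompt for "make this easier to understand" or "don't use legalese" and pass the draft text as the user message.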

Ksenia: Have you gotten any good legal advice from it?

Che Chang: I have not gotten great legal advice from GPT yet, but maybe one day.

How To Navigate the Uncertainty

Ksenia: We are currently in the midst of real unpredictability around LLMs. How do you navigate such an uncertain regulatory landscape, and how does this influence your research and development process?

Che Chang: I've been fortunate to be in this field for a few years now. One thing that has struck me is that no one is really sure what the right answer is. Anyone who tells you they know the right regulatory landscape and approach for sure is probably wrong.

Everyone is trying to figure it out – governments around the world, industry, academia. What you really do is think about precedent: what have people done in similar situations with innovative technology? Not having a clear regulatory landscape is not new at all. It happens every few years – with the Internet, mobile, and more.

This generation of tech is very similar – some things are new but fundamentally, there are analogies to compare it to. Based on history, how should you approach this in the future? It's kind of like how AI predictions themselves work – analyzing historical data to predict future outcomes. And then doing it in good faith based on correct principles and laws. Working with stakeholders, policymakers, etc. to do the right thing.

Ksenia: How often do you recalibrate your plan?

Che Chang: Probably weekly. I joke that one week at OpenAI is like three months in the real world. Things move very quickly.

Challenges Working Across Different Jurisdictions

Ksenia: OpenAI puts effort into building connections with governments. Can you share your experience with them? Recently, you’ve been traveling and meeting with them a lot!

Che Chang: I think everyone is excited and not sure of the best thing to do. That's true of every government agency we've talked to. When I first started doing this in a prior role, the first thing they would say is, what is this about? Why does it matter for my job and constituents?

Now people understand why it matters, but they don't know what they should do – how to regulate or deal with it. But they're genuinely trying to understand so they can do the right thing. This is the latest disruptive tech they have to deal with. Sometimes they don't like doing it, but there's general recognition that it's important, here to stay, and will change the world. The answer isn't to not have it; the answer is to figure out how to build, use, and release it carefully and thoughtfully.

Ksenia: Che, what don't they yet understand about it?

Che Chang: A few years ago, the what and the how were complete mysteries. There's some understanding of the "what" now, but not yet a clear understanding of the "how." They're trying to understand fundamentally how it works, the risks, the harms. The old paradigm of the Internet is that you have a database and retrieve data from it. Generative models do something very different: they create new things from scratch based on what they have learned. That mental model is hard to wrap your head around. Once they understand it clearly, it will make a lot more sense where to regulate.

Ksenia: So it's a longer educational process for them?

Che Chang: I think it will be a very quick process. But a lot of the core educational pieces are happening now – how it works, the risks, etc. People hear about risks but also see good use cases. The details in between are still murky.

Ksenia: What are the particular challenges in working across different jurisdictions?

Che Chang: Different countries and regions have slightly different approaches to regulation. Some want one overarching framework, but that will be challenging – "AI" is such a broad term that it's not a clear regulatory category. It means anything you want it to mean: any behavior a person can do that a machine can also do. Regulating all machine behavior with one law is going to be hard. We don't have one law that regulates human behavior; we have lots and lots of laws. So it's going to be challenging.

Some countries are more sector specific, thinking about industries and use cases. Others are somewhere in between. But the theme is everyone wants to understand proper regulation.

A poor outcome would be if every country had a slightly different framework you have to think about when using these technologies.

We're hopeful that the global conversation leads to more standardized, understandable global standards. But international coordination on anything is challenging, especially new tech.

Ksenia: How does your team navigate this?

Che Chang: A lot of it now is education. We do a lot of traveling to go talk to them and meet with them in person. Recently, the White House announced commitments from major AI labs, the UK is discussing a summit, and Japan and others are stepping in too. We hire good people, read, follow up, and think hard about the issues.

For me, when you think about a disruptive new technology, you try to analogize what it's doing to what people are familiar with. If you simplify the current generation of machine learning, it's analyzing historical data to make a prediction about the future. With that framework, you can anticipate a lot of regulatory concerns. There may be scenarios where the historical data is biased or inaccurate, or not a good indicator to rely on. And a prediction about the future is not the right answer or a guarantee of an answer: there are scenarios where predictions are great, others where predictions should be vetted by people, and others where predictions shouldn't be used at all. Balancing those will be key in any framework.
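To make the "historical data in, prediction out" framing concrete, here is a toy sketch in Python. The numbers and the salary scenario are entirely made up for illustration; the point is only that a fitted model's predictions inherit whatever its history contains.

```python
# Toy sketch: a model is a function fit to historical data, so its
# predictions are only as good as that history. All data is invented.
import numpy as np

# Historical observations: (years of experience, salary offered, in $k).
# If past offers were systematically skewed, the fit will be too.
experience = np.array([1, 2, 3, 4, 5], dtype=float)
salary = np.array([50, 55, 61, 64, 70], dtype=float)

# Fit a simple linear model: salary ~ slope * experience + intercept.
slope, intercept = np.polyfit(experience, salary, deg=1)

# The "prediction about the future": an offer for 6 years of experience.
predicted = slope * 6 + intercept
print(f"predicted offer: ${predicted:.0f}k")  # vet before relying on it
```

In Che's terms, the regulatory question is when such a prediction is fine on its own, when it needs human vetting, and when it shouldn't be used at all.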

Ksenia: What would be your ideal way for policymakers to approach regulating this landscape?

Che Chang: Learn about what is actually being done, think about the positive things that are happening, and think about areas such as highly regulated industries, where relying on historical data and predictions may be dangerous and challenging – those areas will take a lot of time and focus. Painting everything with one brush will be hard. But I think governments are generally approaching it the right way: they are curious and worried, but still trying to genuinely understand so they can do the right thing. I haven't met any regulator or policymaker who doesn't get it at all – the degree of education differs, but all are eager and smart, and they ask nuanced questions. It's just a hard topic, but they are approaching it the right way.

Guiding Enterprises: Understanding Liability and Risks

Ksenia: Let's talk implementation. Enterprises want to use cutting edge AI but struggle to determine what they can and can't do with it. What do you typically advise?

Che Chang: The most common introduction we get from enterprise clients is that they say:

Look, I read about AI all the time and, you know, we can't avoid it. We really need to do something. We're going to do something. What should we do?

This question resonates with a lot of people, right?

Large language models are a very general technology. How will they be useful in your business? Where are they implementable? They can be used literally anywhere. So from a risk perspective, I advise thinking about which relatively low-risk areas you could try and test the technology in, where you can get a pretty good lift without running into a bunch of potential legal exposure. Start with those, scope them out, and build the institutional knowledge and muscle to understand it and address questions. Start small and low risk, then expand to related areas, and before you know it, you are a leader in pioneering AI in your industry.

That's happened with a lot of the partners we worked with. Before ChatGPT really blew up, they spent the time and effort really digging in and going through the details, not just thinking at a high level: Where are the specific areas where AI is going to help us? How are we going to implement it? How are we going to clean up all the data that's there? How are we going to hook it up to all our different systems? The people who thought about it with that level of intentionality are the ones who are really successful today.

Ksenia: When they ask about liability – who is liable for use of these tools?

Che Chang: It’s a hard and controversial topic. No one really knows – you’ve got developers who use AI systems, creators of systems like us, enterprises building things, users using tools. Like self-driving cars, there are questions around who is responsible when. With current tools, questions arise – if an end user puts something in that may cause liability, is responsibility with them, with the company providing it, with us? Different people have different responsibilities and oversight abilities. Those are discussions happening now with clients and governments.

Ksenia: Beyond starting low risk and going step-by-step, any other risk management advice for safe use?

Che Chang: One common question is whether each company should have a separate AI responsibility or ethics officer, or a dedicated organization. I think what you really want is strong knowledge-management practices: the ability both to take in new information about the field's latest happenings and developments and to push it back out.

I had a good discussion with a big customer recently. They built a "center of excellence" within the company to get everyone up to speed. You don't want an internal org so large it dwarfs everything else; it may be better to build a small, specialized team that can train the trainers. They learn it, then evangelize it out to the rest of the company. It's easier to teach AI risks and concepts than to have specialists learn entirely new business lines across the company.

Ksenia: Let's talk about restricting the use of OpenAI tools in certain scenarios or industries. Should industries figure that out themselves, or should OpenAI restrict them?

Che Chang: We definitely restrict use. We have usage policies covering cases where we're not comfortable that the technology is ready, or where we don't want our systems used that way. An easy example is regulated industries: you shouldn't use it for medical advice, and you shouldn't use it for legal advice.

Beyond what's legally required, one example is that we don't want it used for large-scale political campaigning. So we have restrictions on that kind of thing, and we've asked customers not to do those things when they've talked to us.

High risk areas are generally good to focus on, and some are areas we don't want to deploy in either.
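For developers building on the API, the public mechanism closest to what Che describes is OpenAI's usage policies plus the moderation endpoint. The sketch below is illustrative only: the moderation endpoint screens for safety categories (violence, hate, and so on), and does not by itself enforce policy areas like medical advice or political campaigning, which OpenAI handles through its usage policies and review.

```python
# Minimal sketch: screening user input with OpenAI's moderation endpoint
# before passing it to a model. This covers safety categories only; it is
# not how OpenAI enforces the broader usage policies discussed above.
from openai import OpenAI

client = OpenAI()

user_input = "Some user-submitted prompt to screen."  # made-up example

result = client.moderations.create(
    model="omni-moderation-latest",
    input=user_input,
)

verdict = result.results[0]
if verdict.flagged:
    # Refuse or route to human review instead of calling the model.
    print("Input flagged by moderation:", verdict.categories)
else:
    print("Input passed moderation; safe to send to the model.")
```

Application-level restrictions, such as declining to give medical or legal advice, still have to be implemented by the developer, typically through system prompts and product design.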

The Future

Ksenia: As systems evolve, what legal challenges do you foresee and how is OpenAI preparing?

Che Chang: One is the overall regulatory landscape – what will global regulation look like? A patchwork by country is not a great outcome.

Industry-specific applications will also be challenging. For example, what should AI look like in the medical sector? There's so much promise in improving healthcare and medical outcomes, and that's the number one question from governments. But it's a highly regulated industry, deservedly so. So we need to find the right balance, which will be challenging.

Ksenia: How might the regulatory landscape change in 6 months?

Che Chang: I think it will be an accelerated version of what we see now. We're just at the start of the inflection curve. Despite headlines, this isn't widely adopted or used yet but has huge potential across different areas. As adoption increases, good and bad will emerge that everyone has to think about and balance.

We're going to see a lot more technological progression over the next several months. We're going to see a lot more regulatory focus and oversight. We're going to see a lot more companies figuring out their AI direction: not their entire strategy, but where they want to go with this, what they want to build, and what they want to do. We're just going to see continued growth everywhere. AI touches everything, so there is unlimited surface area to cover. Next year this conference will probably be 10x bigger, because everyone is hungry to learn and figure out how to do this right.


*This conversation took place as a fireside chat at the Ai4 Conference in Las Vegas.

Thank you for reading! Please feel free to share this with your friends and colleagues 🤍
