
AI that Writes like Us. Regulated.

From the buzz over the past month, it seems the natural language AI, ChatGPT, and its brethren artificial intelligence thingies will write all student essays, assignments, literary art, songs, marketing materials, letters to editors, etc. in the future. Maybe.

My next step in understanding this brave new technology was reading the terms and conditions of use1 from OpenAI, creator of ChatGPT and other products. This may sound boring as hell, because it would be boring as hell to most people, but it fascinated me. I believe it reflects an understanding by the AI's creators of the potential for right- or wrong-doing with these technologies.

OpenAI was started as a collaboration between a number of tech concerns that were working in the area of artificial intelligence and wanted to do the ‘right thing’ with AI. In particular, the goal was to ensure that AI was NOT the domain of the privileged or used to the advantage of any particular group.

This continues to be a goal of OpenAI. I can’t help but recall the early days of Google. Google’s goal was, and still is2, to democratize information by creating a search engine that allowed anyone with an internet connection to access the same information. To their credit, they’ve pursued and achieved this goal. Alas, there came a point in Google’s existence when it conceded that there had to be a way to monetize the search platform, or it would search no more, due to lack of funds. And so came Ads.

The first thing that struck me about OpenAI’s terms and conditions was the healthy number of clauses dedicated to the discussion of payment for services. The organization must plan to monetize its services.

Interesting to see how this plays out. Will it be:

  • Premium model – polished, exclusive output available for a fee. 
  • Ads. How might this work: inserting infomercials about dog food for people trying to construct good prose about feeding dogs? Something about this is incongruous, because the value AI adds is the well-synthesized package the information is delivered in. What would an ad add other than extraneous content?
  • Rumours3 suggest that Microsoft is interested in using AI to revive Bing, the search engine abandoned by many. This makes sense to me, as it puts AI-synthesized information into a familiar functional category, and search competitors would be looking for such a distinguishing feature. Previously, Google’s search results revealed assorted, relevant information that the user had to parse together for a good overview of the topic. AI natural language processing could instead offer a cohesive, well-written summary of what’s out there on the topic. However, I’ve noticed that, without fanfare, Google has been providing such answers to search queries for some time now.

Back to the terms and conditions that govern OpenAI’s products, including ChatGPT.

Other things that struck me about the terms of use:

The output of OpenAI services must be represented as the output of an AI. Concerns from a variety of sectors that AI will be used to forge novels, visual arts, or student assignments are unfounded if OpenAI’s terms are upheld.

How big an ‘if’ is this? If a student submits a paper created by an AI as their own work, they may be violating the terms of use of the resource. Who is watching to ensure this doesn’t happen, and what are the consequences if it does? At the very least, a violation of OpenAI’s terms gives the organization the right to deny the offender any further service.

The minimum age for use of the services is 18. If enforced, this eliminates a lot of the concern about grade school students using the system to create their assignments.

OpenAI’s terms are only in effect while a user is using the service. Unless my less-than-perfect ability to read legalese is missing something, this means a user only has to stop using a product and there is no further obligation between them and the organization. These are my kind of terms of service: no repercussions because a user forgets they did something, once, six years ago, and no complicated unsubscribing.

However, the terms do specify that the Output – what the user gets as a response from the AI – can be used to ensure compliance with the terms. I take this to mean that a database of Outputs could be created, so comparisons could be made to determine whether people are representing works as their own when the works were really created by the AI.

Overall, OpenAI’s terms of use get my good-business seal of approval for a responsible, ethical, and transparent approach to the use of technology. My ‘but’ is: will these principles actually govern how the technology is used?


1 https://openai.com/terms/

2 https://www.google.com/search/howsearchworks/our-approach/

3 Have to call them rumours, as this news story says it best: “OpenAI and Microsoft declined to comment.” https://www.reuters.com/technology/microsoft-aims-ai-powered-version-bing-information-2023-01-04/

Thanks for reading.

