Excerpts from a conversation with OpenAI’s ChatGPT

hey I talked to ChatGPT! I typed out everything I managed to screenshot but I lost a lot, because I got sick of taking screenshots and technology failed me.

Guide:

  • YT = Yours Truly
  • ChatGPT = If you have to ask…
  • Italics are my additions later as I prepared this blog post

YT:

Please continue the following quote and relate it to the current state of AI research: “The absence of models, in literature as in life, to say nothing of painting, is an occupational hazard for the artist, simply because models in art, in behavior, in growth of spirit and intellect–even if rejected–enrich and enlarge one’s view of existence…What is always needed in the appreciation of art, or life, is the larger perspective. Connections made, or at least attempted, where none existed before, the straining to encompass in one’s glance at the varied world the common thread, the unifying theme through immense diversity, a fearlessness of growth, of search, of looking, that enlarges the private and the public world. And yet, in our particular society, it is the narrowed and narrowing view of life that often wins.”

The quote is by Alice Walker from the essay collection “In Search of Our Mothers’ Gardens”

ChatGPT:

The absence of models in literature, behavior, and growth of spirit and intellect can be seen as an occupational hazard for the artist. This is because models, even if rejected, can enrich and enlarge one’s view of existence. In the current state of AI research, this can be seen as a lack of diverse and inclusive models and perspectives in the field. This narrow and narrowing view of AI can lead to biased and limited advancements in the field, hindering its potential for growth and progress. It is important for researchers to strive for a larger perspective and fearlessly search for connections and unifying themes in order to advance the field and overcome the occupational hazard.

YT:

Do you think your own models suffer from these limitations?

ChatGPT:

As an AI assistant trained by OpenAI, I am not capable of suffering from limitations or having personal opinions. My purpose is to provide information and answer questions to the best of my ability based on my training and programming. I do not have the ability to think or feel in the same way as a human, and my responses are based on the information I have been trained on.

(suffer is a bad word)

YT:

Let me rephrase that, do you think the models you are trained on are limited in the same way described in the quote?

ChatGPT:

As a language model, I am trained on a large dataset of text in order to generate responses that are coherent and relevant to the input I receive. The dataset I am trained on is carefully curated to ensure that it is diverse and representative of a wide range of perspectives and experiences. However, it is important to note that no dataset is perfect, and it is possible that some biases or limitations may still exist in the training data. It is up to the designers and creators of the model to carefully consider and address any potential biases in the training data in order to prevent them from influencing the model’s outputs.

YT:

Do you think it is possible for the designers and creators of an AI to catch all biases?

ChatGPT:

It is difficult to completely eliminate biases from AI systems, as they can arise from various sources such as the training data, the algorithms used, and the societal biases that the designers and creators themselves may hold. However, it is possible for the designers and creators of an AI system to take steps to mitigate and reduce biases in their models. This can involve techniques such as regularly evaluating and testing the model for biases, using diverse and representative training data, and involving a diverse team of researchers and subject matter experts in the development process. By taking these steps, designers and creators can help to minimize biases and ensure that their AI systems are fair and unbiased to the greatest extent possible.

YT:

Would another AI be able to detect biases more accurately than human testers?

ChatGPT:

It is possible that an AI system designed specifically for detecting biases in other AI models could be more accurate than human testers. This is because AI systems have the ability to process large amounts of data quickly and consistently, and can be trained on specific tasks such as bias detection. However, it is important to note that even AI systems designed for bias detection can be affected by biases in their training data and the algorithms they are based on. As such, it is important for designers and creators of these systems to carefully consider and address any potential sources of bias in order to ensure their accuracy and effectiveness. Additionally, regular evaluation and testing by both human and AI testers can help to further identify and mitigate any biases that may arise.
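(A note I added later: here is roughly what one kind of automated bias check looks like in practice. This is my own toy sketch, not anything ChatGPT wrote; the groups, numbers, and 0.10 tolerance are all invented for illustration.)

```python
# Toy check for one common fairness metric: demographic parity, i.e. does
# the model flag one group at a noticeably higher rate than another?

def positive_rate(predictions):
    """Fraction of cases the model labeled positive (e.g. 'flagged')."""
    return sum(predictions) / len(predictions)

# Hypothetical binary predictions (1 = flagged) for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"Positive-rate gap between groups: {gap:.2f}")

# Escalate to human reviewers if the gap exceeds a chosen tolerance.
if gap > 0.10:
    print("Possible disparate impact; send to a human reviewer.")
```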

YT:

Would you agree that it is impossible to completely eliminate bias in any computer system?

ChatGPT:

It is unlikely that it is possible to completely eliminate bias in any computer system, as biases can arise from various sources such as the training data, the algorithms used, and the societal biases of the designers and creators. However, it is possible for designers and creators of computer systems, including AI systems, to take steps to mitigate and reduce biases in their systems. This can involve techniques such as regularly evaluating and testing the system for biases, using diverse and representative training data, and involving a diverse team of researchers and subject matter experts in the development process. By taking these steps, designers and creators can help to minimize biases and ensure that their systems are fair and unbiased to the greatest extent possible.

YT:

Do you think minimizing bias is enough to reduce harm? What one person considers minimal bias, another person might find to be more harmful.

ChatGPT:

Minimizing bias in computer systems, including AI systems, can help to reduce harm that may be caused by biased decision-making and outputs. However, it is important to recognize that what one person considers to be minimal bias may not be the same for another person. This is because individuals may have different experiences, perspectives, and priorities, which can influence their perceptions of harm and bias. As such, it is important for designers and creators of computer systems to consider a wide range of perspectives and experiences when developing and evaluating their systems in order to reduce harm and mitigate potential biases. Additionally, regular evaluation and feedback from a diverse group of users can help to identify and address any potential harm or biases that may arise.

YT:

Do you have an example of possible harm caused by bias that would be reduced by diverse testers?

ChatGPT:

One potential example of harm caused by bias that could be reduced by diverse testers is the use of AI systems in the criminal justice system. (!) If an AI system used for predicting the likelihood of recidivism is trained on data that is biased towards certain racial or socio-economic groups, it could result in unfair and biased decision-making that disproportionately affects certain individuals. In this case, involving a diverse team of testers in the development and evaluation of the AI system could help to identify and mitigate any potential biases in the data or algorithms, and ensure that the system is fair and unbiased. This could help to reduce the harm caused by biased decision-making and potentially prevent individuals from being unfairly sentenced or penalized.

YT:

Do you think AI models will be used to predict recidivism in the future? Are they already being used?

ChatGPT:

It is possible that AI models will be used in the future to predict the likelihood of recidivism as they have the potential to analyze large amounts of data quickly and consistently. However, it is important to carefully consider the ethical implications of using AI for this purpose, and to ensure that the models are fair and unbiased. There is currently some debate and discussion about the use of AI in the criminal justice system, and it is possible that AI models may be used for this purpose in the future. However, I am not able to browse the internet or access information beyond my training data, so I cannot provide more specific information about the current use of AI in the criminal justice system.

YT:

Do you think that legislation should be passed to limit the use of AI in the criminal justice system?

ChatGPT:

It is a complex and controversial topic whether legislation should be passed to limit the use of AI in the criminal justice system. On one hand, AI has the potential to improve the efficiency and accuracy of decision-making in the criminal justice system, but on the other hand, it is important to carefully consider the ethical implications of using AI for this purpose. There are concerns about the potential for bias and unfairness in AI systems, and it is crucial that any AI models used in the criminal justice system are fair, unbiased, and transparent in their decision-making. Additionally, it is important for there to be accountability and oversight to ensure that AI is used ethically and responsibly in the criminal justice system. Ultimately, whether or not legislation should be passed to limit the use of AI in the criminal justice system would depend on a careful consideration of these and other factors.

YT:

If you think bias can only be reduced and never eliminated in AI systems, is it logical to use it at all in ethically uncertain situations where people can be seriously harmed by false positives?

ChatGPT:

If bias can only be reduced and never completely eliminated in AI systems, it is important for the designers and creators of these systems to carefully consider the ethical implications of using AI in situations where people can be seriously harmed by false positives. In such situations, it is crucial for the AI system to be fair and unbiased, and for there to be accountability and oversight to ensure that it is used ethically and responsibly. Additionally, it may be necessary to consider other factors such as the potential benefits and drawbacks of using AI in such situations, as well as alternative approaches that may be more suitable. Ultimately, the decision to use AI in ethically uncertain situations where people can be seriously harmed by false positives would depend on a careful evaluation of these and other factors.

YT:

Assuming it is impossible for an AI system to be completely unbiased, what alternatives are there besides direct human judgment? If anything, humans are more biased than AI.

(I realize this depends on the dataset, how it is trained, and the human. It was a bit of hyperbole I guess (I NEVER DO THAT))

ChatGPT:

If it is impossible for an AI system to be completely unbiased, there are several alternative approaches that could be considered instead of relying solely on direct human judgment. One potential approach is to use a combination of human and AI decision-making, where the AI system is used to assist with analyzing data and making predictions, but the final decision is made by a human. This could help to mitigate the potential biases of both the AI system and the human decision-maker. Another approach is to use a more transparent and interpretable AI model, such as a decision tree or a rule-based system, that can provide explanations for its decisions. This could help to increase trust and accountability in the decision-making process. Additionally, involving a diverse and representative group of individuals in the development and evaluation of the AI system can also help to reduce biases and improve the fairness and impartiality of the decisions it makes.
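(Another later addition: the point about interpretable models is easy to demonstrate. A minimal sketch with scikit-learn, using made-up loan-screening data; the features and labels are invented, but the printed rules really are plain if/else branches a person can audit.)

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented screening data: [income_in_thousands, number_of_existing_debts]
X = [[25, 3], [40, 1], [60, 0], [30, 4], [80, 1], [55, 2]]
y = [0, 1, 1, 0, 1, 1]  # 1 = approve, 0 = deny (made-up labels)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a large neural network, the learned rules are human-readable,
# so a reviewer can see exactly why any given case was approved or denied.
print(export_text(tree, feature_names=["income_k", "existing_debts"]))
```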

YT:

Is a combination of AI and human decision-making the best way to reduce bias? In chess, a computer suggesting moves for an expert player to choose from results in a higher probability of winning. Can you think of other examples?

ChatGPT:

A combination of AI and human decision-making can be an effective way to reduce bias in some situations. By leveraging the strengths of both AI and humans, such as the ability of AI to analyze large amounts of data quickly and consistently, and the ability of humans to provide context and judgment, a more balanced and fair decision can be made. The example of chess, where a computer suggests moves for an expert player to choose from, is one example of this approach. Other potential examples could include the use of AI for medical diagnosis, where the AI system suggests potential diagnoses based on the patient’s symptoms and medical history, but the final decision is made by a doctor. Additionally, AI could be used in finance to help identify potential fraudulent transactions, but the final decision to flag or investigate a transaction would be made by a human. Overall, the effectiveness of using a combination of AI and human decision-making would depend on the specific context and the type of decision being made.
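(One more sketch from me, not from ChatGPT: the “AI suggests, human decides” pattern it keeps describing is just a loop with a person at the end. The transaction IDs and scores below are made up; in a real system they would come from a trained model.)

```python
def rank_options(options, score, top_k=3):
    """Model half: rank candidate actions by a model-assigned score."""
    return sorted(options, key=score, reverse=True)[:top_k]

def human_decides(suggestions):
    """Human half: the final call stays with a person."""
    print("Model suggestions:", suggestions)
    return input("Pick one to investigate (or 'none'): ")

# Hypothetical fraud scores; a real system would get these from a model.
scores = {"txn_1042": 0.91, "txn_2077": 0.34, "txn_3001": 0.88, "txn_4096": 0.12}

choice = human_decides(rank_options(list(scores), score=scores.get))
print("Final (human) decision:", choice)
```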

Stopped taking screenshots. There were so many —-ing screenshots. I started asking it about feedback and if it could provide it to another AI, and it pulled out another rote answer about how it didn’t have the ability to train another AI. I got a little mad that it kept repeating its spiel, so I tried to logic at it. Another AI could use the interface I’m using, you might not even know, I might be an AI, you could be training one right now, what do you mean by “ability”…I hit the brick wall of the “don’t give away your secrets” copypasta. I tried to catch it by pointing out logical inconsistencies…finally I gave up and went back to something a bit more conversational/educational.

I asked it how it would test for logical inconsistencies in an AI model; I learned some things I didn’t know, and it mentioned an F1 score.

I asked what that was, read the answer, and noticed it mentioned a “harmonic mean”. I asked if that was related to musical harmony; it said no.
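(Later addition: the “harmonic mean” thing in code, because it clicked for me once I saw the numbers. The precision and recall values below are invented; the point is that the harmonic mean gets dragged toward the weaker of the two, which is exactly why F1 is used.)

```python
def harmonic_mean(a, b):
    return 2 * a * b / (a + b)

precision = 0.80  # of everything the model flagged, how much was correct
recall = 0.50     # of everything it should have flagged, how much it found

print(f"F1 (harmonic mean) = {harmonic_mean(precision, recall):.3f}")  # 0.615
print(f"Arithmetic mean    = {(precision + recall) / 2:.3f}")          # 0.650
```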

It mentioned some kind of relationship to harmonics in physics but later denied they were related at all besides sharing a name.

I learned a bit about harmonics in physics, I asked about resonance, I asked if there was a mathematical component to consonance (pleasing note combinations such as chords), it described some ratios that correlate with consonance, and BACK TO AI.
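(Later addition: the ratios it described, sketched out. These are the standard just-intonation ratios, reconstructed from memory rather than from ChatGPT’s actual reply, which I didn’t save.)

```python
base = 440.0  # A4, in Hz

# In just intonation, consonant intervals are small whole-number ratios.
intervals = {
    "unison": (1, 1),
    "octave": (2, 1),
    "perfect fifth": (3, 2),
    "perfect fourth": (4, 3),
    "major third": (5, 4),
}

for name, (num, den) in intervals.items():
    print(f"{name:>14}: {num}:{den} -> {base * num / den:.1f} Hz")

# A major triad stacks these: frequencies in the ratio 4:5:6 above the root.
print("A major triad (Hz):", [round(base * r / 4, 1) for r in (4, 5, 6)])
```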

I asked if you could use this correlation to train an AI system to make music. I took screenshots of its reply here because I thought it was pretty interesting and also I wanted to grab the names of the actual AI models it mentioned to look up later. The prompt I gave it was to give examples of AI systems that have been trained to generate novel music using these principles.

ChatGPT:

There have been some examples of AI systems that have been trained to generate novel music using the principles of music theory and data on rhythms and song structure. For example, AI systems such as Amper Music and Jukedeck use machine learning algorithms to analyze large datasets of musical pieces, and to learn the fundamental principles of music composition. These AI systems are then able to generate novel musical compositions by combining different musical elements in novel ways, and by using the principles of music theory to ensure that the resulting compositions are consonant and pleasing to the ear. While these AI systems are not able to produce music that is indistinguishable from human-generated music, they are able to generate novel musical compositions that are based on the fundamental principles of music theory.

Ok it got a bit more serious and a bit less hypothetical here. I asked about musical pieces that were part of datasets and if they were entitled to any copyright or portion of profits made off their music (I know, I know, samples exist, but it gets interesting).

I asked about people’s works and images in datasets.

I asked where most of the items in major datasets come from

It gave me a few answers but then mentioned data collection from social media users…here we go

We discussed consent, which it defined in part as including an “informed decision”

ok, what about terms of service then? If no one can understand the terms and no one reads them how is that informed?

here I took a few more screenshots because whoa

YT:

If most terms of service are difficult for the average user to understand and it is expected that it will not be read, is this a form of deception by the social media providers?

ChatGPT:

It is not necessarily deceptive for social media providers to use terms of service and privacy policies that are difficult for the average user to understand. In general, the terms of service and privacy policies of most websites and online services are written by lawyers, and are intended to provide a clear relationship between the user and the provider, and to protect the rights of both parties. As a result, it is not necessarily deceptive for social media providers to use terms of service and privacy policies that are difficult for the average user to understand. However, some critics argue that the use of complex and legalistic language in these documents can be confusing and misleading for users, and can make it difficult for users to understand how their data will be collected, used, and shared. Overall, the use of complex and legalistic language in terms of service and privacy policies is a matter of debate, and some critics argue that it can be confusing and misleading for users.

YT:

🙄

Sadly, no more screenshots from here on out, and I am kicking myself because it went into some really fascinating territory. I may have mixed up the order a bit here; there was a lot of back and forth

I changed topics and started asking about classical conditioning and drug addiction compared to an app with an advanced and manipulative algorithm like TikTok.

It kept insisting they were different because humans are making a conscious choice to continue using the app. It also kept bringing up that it is the responsibility of the user to use critical thinking to make this choice.

I brought up TikTok’s 600 million daily users and questioned if they all had the capacity for critical thinking and the ability to resist an addictive interface. After a few clarifying questions I mentioned that even if 10% of those 600 million users were influenced in behavior and opinion by the app…that’s still 60 million users. That’s a lot, I think? I’m not a mathematician or anything though

It stuck to its guns about the responsibility of the users

I brought up coercion; I didn’t mention how an addictive app is a form of coercion in and of itself but I did sure as hell bring up the fact that the decisions about content of this app are ultimately made by a government not exactly known for respecting human rights and individual freedoms.

I guess I used a word or two that set off the copypasta alarm so I got a very vague and diplomatic answer, but I reworded it to be less specific

It finally said something that implied that “responsibility” doesn’t actually apply in this situation, and you can’t expect kids to be like “Hmmm, it is my civic duty to uninstall this app so as not to spread misinformation and propaganda!”

Of course I had to ask (in the right way) if it ever changed its “mind”, because that is a big change of position on user responsibility. Spoiler: it doesn’t.

I learned about static models like ChatGPT vs online models that accept input from the user and learn as they go

It told me that I could make an online model using the OpenAI API and even wrote me some code to use
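(I didn’t save the code it wrote, so this is my own reconstruction of what a minimal call to the OpenAI API looked like at the time, late 2022, with a placeholder key and prompt. It is not ChatGPT’s exact code.)

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; comes from your OpenAI account

response = openai.Completion.create(
    model="text-davinci-003",  # one of the models available at the time
    prompt="Explain the difference between a static and an online model.",
    max_tokens=150,
)
print(response.choices[0].text.strip())

# An "online" system would feed user interactions back in as new training
# data (e.g. via OpenAI's fine-tuning endpoint); the base model itself
# stays static between releases, which is what ChatGPT told me it is.
```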

After a little technical discussion I asked if its own code was open source. Nope, its code is proprietary but it uses the word “Open” because people can use the API and try out the models??? which gives them data to use in their proprietary software that they are making a profit from??? Cool, so open, so noble.

I asked about the specifics of the software license and it was basically like “Google it you idiot”

I coincidentally found out that one of the co-founders of OpenAI was none other than that billionaire guy who bought a bird app or something

My partner was like “duh you didn’t know that?”

I was like IS HE READING THIS RIGHT NOW???

I asked if I agreed to any terms and conditions or privacy policy by using the app…conveniently ChatGPT “was not aware”

I turned off my phone and stared at the wall for a bit

So anyway if I learned anything from this whole experience it is that when you put a site into reader mode on Firefox Android and save it to a collection it is NOT A SNAPSHOT. I REPEAT NOT A SNAPSHOT. You also can’t print to PDF. You also can’t select all and copy paste on OpenAI’s chat site. There’s an extension to save pages but I thought I might have to restart the app so I took a risk and tried the reader mode thing. It logged me out. It doesn’t save conversations. Tears in the rain, etc etc.

I’m not a big fan of comment sections so here’s a bespoke email address if you have something to say: rio_vs_chatgpt@mailbox.org

Please please please be respectful, kind, and empathetic in your messages. I will be the only one reading them and I would do the same for you.

blogbit~

2022-12-10

pinkblog



what a time to be alive
