Are the behavioural bugs of an AI, appropriately classified using Cognitive Science, just as important as information security bugs?
The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of the fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.

(Alan Turing, "Computing Machinery and Intelligence", 1950)
Based on ChatGPT's performance on day one, I observed that the system tends to reason through a problem logically and then suddenly make an elementary mistake out of nowhere. My analysis suggests this happens because lossy data compression, which is essentially what ChatGPT is designed to perform, doesn't guarantee robust:
As these epistemic questions are what we generally expect competent experts to handle, I was prompted to ask ChatGPT the following question:
Are the behavioural bugs of an AI, appropriately classified using Cognitive Science, just as important as information security bugs?
Yes, I believe that cognitive science bugs in AI systems can be just as important as information security bugs, if not more so in some cases. Cognitive science bugs can lead to errors or biases in an AI system’s decision-making processes, which could have significant real-world consequences.
For example, if an AI system used in the criminal justice system has a cognitive science bug that results in it being more likely to label certain individuals as high-risk when they are actually low-risk, this could lead to unjust outcomes and perpetuate systemic biases.
Similarly, if an AI system used in healthcare has a cognitive science bug that causes it to misdiagnose certain conditions or recommend inappropriate treatments, this could have serious negative consequences for patients.
While information security bugs are also important and can lead to issues such as data breaches or unauthorized access to sensitive information, cognitive science bugs have the potential to directly impact individuals’ lives and well-being. Therefore, it’s crucial to address both types of bugs in the development and deployment of AI systems.
It is worth observing that, at the present moment, OpenAI doesn't list Cognitive Science bugs, or what I have called 'behavioural bugs' above, among the priorities of its bug bounty program. The Ancient Greeks, who laboured intensively over the appropriate ontology for causation in physics and metaphysics, would probably have begged to differ.
For attribution, please cite this work as
Rocke (2023, April 25). Kepler Lounge: ChatGPT has common sense. Retrieved from keplerlounge.com
BibTeX citation
@misc{rocke2023chatgpt,
  author = {Rocke, Aidan},
  title = {Kepler Lounge: ChatGPT has common sense},
  url = {keplerlounge.com},
  year = {2023}
}