Comments
Michael,
Very interesting link. The trouble is the data set. If you want to stuff up a chatbot that is, say, giving customer service for an airline (this happened), look at the hacker group who trained the AI into giving strange and offensive replies. The small data set had been intentionally corrupted.
The situation with ChatGPT is different in that there is a very large data set. The worry is false information being made up. I like that the lawyers were caught out using fake cases. This points either to crafty programming that sounds good but is fake, or to stored misinformation.
It is hard to program that crafty response. It is also hard to know in advance what to provide as misinformation. You don’t know what will be asked. So how did it get a fake set of results?
It would look for articles and pick up fake articles as being trusted.
The Russians have been doing a lot of misinformation work for years (all sides do it). England said its children ate more carrots to be able to see further at night and spot German bombers, a story put about to hide the existence of radar.
So how much wrong information is in the large data of ChatGPT? Time will tell.
Bruce
Bruce Williams, Tue 30 May, 04:02
Came across this story today and thought of your post.
Chat GPT’s behavior reminds me of spirit controls like Phinuit, who would give the correct answer if they knew it, but often would improvise a made-up answer if they didn’t.
If you read the comment thread at the link, there are a couple of helpful explanations of why and how Chat GPT does this. Essentially the program functions like the sentence completion option on a tablet's keypad. The tablet isn't trying to tell the truth or even make sense; it's simply trying to offer the most likely term that follows from the term you just typed.
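A toy sketch of that idea, for anyone curious (this is not how ChatGPT is actually built - the real system uses a neural network with billions of learned parameters - but the "most likely next word" principle is the same; the sample sentence and names below are invented purely for illustration):

    from collections import Counter, defaultdict

    # A tiny sample text; a real system is trained on a vast corpus.
    text = "the cat sat on the mat and the cat sat on the rug"
    words = text.split()

    # Count, for each word, which words follow it and how often.
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

    def suggest(word):
        """Return the word most often seen after `word`, like keypad completion."""
        if word not in followers:
            return None
        return followers[word].most_common(1)[0][0]

    print(suggest("the"))  # -> 'cat' (the most frequent follower in the sample)
    print(suggest("cat"))  # -> 'sat'

There is no notion of truth anywhere in that sketch, which is the point being made above.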
So maybe spirit controls are equivalent to AI? Or maybe our subconscious is AI-like, and it fills in gaps in channeled messages? Or maybe the controls are generated by the subconscious? Or …
My head hurts.
https://www.powerlineblog.com/archives/2023/05/ai-makes-st-up.php
Michael Prescott, Sun 28 May, 21:17
Dear all,
This AI article by Michael has been very interesting. "The Turing test was named after Turing. Robert Epstein, of the Cambridge Center for Behavioral Studies in California, says that Turing never pretended to give the details of a practical test. There have been arguments by John Searle (Chinese Room) against the adequacy of the Turing test to distinguish mechanical thought" (p. 158, Paradigms Regained, John Casti). It is just a benchmark test to see when we are in trouble. Think HAL in 2001: A Space Odyssey.
We had lots of discussions about AI and General Intelligence. There was work by Howard Gardner on multiple intelligences. He first listed seven types (musical etc.) and in 2009 he also suggested two additional types of intelligence, existential (spiritual) and moral.
I believe that those with high spiritual intelligence are attracted to this blog.
In summary, a quote from Oscar Wilde: "The intellect is not a serious thing, and never has been. It is an instrument on which one plays, that is all."
Thanks to all for the performance.
Bruce
Bruce Williams, Mon 8 May, 05:58
Michael,
A ‘correction’ is in the mind of the one who corrects! - AOD
Amos Oliver Doyle, Sun 7 May, 14:26
Amos, You said something to the effect that AI seems to work much like Wikipedia in correcting itself. However, I have never seen Wikipedia correct itself on anything in the paranormal or spiritual venues. It remains steadfast in debunking all of it, while ignoring any efforts at correcting completely materialistic views. I do find it strange that AI is quick to accept corrections and apologize so often.
On to another subject Monday.
Michael Tymn, Sat 6 May, 21:12
Thanks to all for the additional comments. If you put "Adin Ballou" into an AI search, you will find that he was a prominent abolitionist and pacifist from the mid-19th Century. Nothing is mentioned about his psychical research. I discussed him in my blog in my entry of May 19, 2014, which is in the archives. I suggested that he might have been the first dedicated psychical researcher, as his book reporting on his psychical research was published in 1852, a year before that of Judge John Edmonds and Dr. George Dexter. I recently did a lengthy review of the book for the Spring issue of the Journal of Scientific Exploration. For anyone interested in that book, the reproduction can be found at http://ia800503.us.archive.org/21/items/expositionofview1852ball/expositionofview1852ball.pdf
Anyone reading Ballou’s book might very well ask what the Society for Psychical Research discovered 30-50 years later that Ballou hadn’t already discovered. And yet, I don’t recall any mention of him in the SPR records.
Michael Tymn, Sat 6 May, 20:29
Elizabeth, thank you for pointing out that my original family name was Polish and not Ukrainian. I was told the same thing by Dr. Alex Imich, the well-known psychical researcher from Poland, about 25 years ago. I was surprised that AI did not provide any Polish link. I do know that my grandfather was born in Austria in an area that belonged to Ukraine but was given to Poland after World War I. He grew up in East Galicia and spoke Ukrainian. No doubt there was much mixture of countries. I take pride in being both Ukrainian and Polish, even though my DNA tests show me to be more Irish and English (from my mother's side) than anything else.
Michael Tymn, Sat 6 May, 20:07
Bruce,
This sounds a lot like how Wikipedia operates. - AOD
“As an AI language model, I’m constantly learning and updating my knowledge from my training data and from the interactions I have with users. When a user corrects me, it’s an opportunity for me to learn and improve my responses in the future.”
Amos Oliver Doyle, Sat 6 May, 14:05
Michael and others,
Additional information to my previous comment:
When your information is corrected, your response is extremely fast. Do you re-examine the data or do you agree with the person?
ChatGPT
When a user provides additional information or corrects any mistake in my response, I don’t re-examine the data. Instead, I use the new information to provide a more accurate answer to the best of my knowledge.
As an AI language model, I’m constantly learning and updating my knowledge from my training data and from the interactions I have with users. When a user corrects me, it’s an opportunity for me to learn and improve my responses in the future.
Bruce
Bruce Williams, Sat 6 May, 06:34
Amos,
Thanks for the reincarnation tips. If I have to come back, I may choose to do so in the Amish or Mennonite community, probably in Lancaster, PA.
Michael Tymn, Fri 5 May, 20:25
I hope AI correctly identified Tymczyszyn as a Polish name.
“-czysz-” is extremely Polish. As a Pole, I enjoy challenging my American friends to pronounce these lovely consonant combos.
Elizabeth, Fri 5 May, 17:59
Michael,
Sandwich Islands - did you know Captain Cook? Sandwich was the European name, not much liked by the inhabitants. At least he didn't use Pie and Chips Islands.
We Australians like names that reflect the place. Where I live was previously called Shark Bay, but we needed to change it for the tourist trade. Too many Shark Bays.
I have been putting ChatGPT to the test with cybersecurity analysis. When you correct it, it agrees too quickly. The information I gave it was missing part of the solution, yet ChatGPT agreed straight away.
Thanks
Bruce
Bruce Williams, Fri 5 May, 14:02
Our languages constantly evolve. That's why trying to police the use of words is normally bad form and unwise. What is appropriate to do instead is to note the change and be cognizant of it.
So appears to be the case with “AI.” What is called “artificial intelligence” nowadays is different from what it was for its founders and the term’s coiners.
Two good links on this :
https://www.wired.com/story/how-we-learn-machine-learning-human-teachers/
https://www.cs.bham.ac.uk/research/projects/cogaff/sloman-turing-test.pdf
Amos Oliver Doyle: On AI generating art, possibly the best and certainly the most wide-ranging discussion I’ve come across was here (https://www.youtube.com/watch?v=Y5QS_th9Ca0)
Dors, Eastern Europe, Fri 5 May, 12:20
Michael Tymn's article concludes as follows: "Applying Occam's Razor, I conclude that Patience Worth was who she says she was – the spirit of a 17th Century woman from England."
My sentiment is in tune with that way of thinking, and with that conclusion.
Further, in the comments, you Michael quote Patience Worth: “Man’s law is precision, God’s is chaotic. Man’s wisdom is offensive to God, therefore He shows his displeasure in complications. To man the complications are chaos, thereby is man deceived.”
It sounds to me that Patience Worth herself cautions us against placing great trust in Occam's Razor.
The Irony of Life strikes again.
Dors, Eastern Europe, Fri 5 May, 11:43
Michael,
Regarding reincarnation, you might want to consider a couple of other things. Reincarnation is like Mission Impossible: "It is your mission, if you choose to accept it!" And my guess is that you will be convinced that it is in your soul's best interest to accept your mission, however impossible it may seem.
You might also consider that time is not linear and you may be able to choose any period of time to reincarnate in. (Just remember that modern dentistry is a wonderful thing!) But I personally might want to choose the period 1880-1930 as an agreeable time for me, maybe in France or England.
And there are billions of other planets that you may be able to go to. All in all, I want to come back but not to the United States. Maybe to a less developed country, perhaps in South America or the European Alps. After reading and listening to many near-death experiences I feel somewhat intimidated by those experiences and would hope for something more mundane, like "Summerland" to loll around in for a while. When I was young I wanted to come back right away but now, from this perspective, I agree with the poet Kahlil Gibran when he said "A moment of rest upon the wind, then my soul will gather sand and foam for another body and another woman shall bear me." I look forward to that moment of rest upon the wind. - AOD
Amos Oliver Doyle, Thu 4 May, 17:32
Amos,
Thanks for the link. I'll watch it in the morning. It's bedtime here in the Sandwich Isles. However, I agree with you that the insanity in the world today suggests we are doomed. I hear something totally insane every day and it seems to get more insane each day. California appears to be leading the way, but the nation as a whole is keeping pace with California. I'm glad I'm 86, not 16, and sure hope that reincarnation is not a fact, at least in the way it is usually thought of.
Michael Tymn, Thu 4 May, 11:47
Here is a video about AI generated artwork. As you can see, AI can do a lot more than answer questions and write reports. – AOD
https://www.youtube.com/watch?v=gQfQiXP9yZA
Amos Oliver Doyle, Wed 3 May, 19:49
I don’t know why there is so much enthusiasm about ChatGPT. When I play with it, it is horrendously wrong in the information it provides. But I suppose since we all live in the ‘Age of Misinformation’, ChatGPT fits right in with the lack of intelligence, laziness and stupidity of most people in America today. Civilization is doomed! - AOD
Amos Oliver Doyle, Wed 3 May, 18:08
Michael,
You might like this article about how journalists are using AI to generate content. https://www.theguardian.com/technology/2023/may/02/chatbot-journalists-found-running-almost-50-ai-generated-content-farms?CMP=oth_b-aplnews_d-3
The godfather of AI (Geoffrey Hinton) has now left Google and is talking freely about how bad actors will use ChatGPT. He also mentioned the simple reasoning level of the software.
AI has a different impact in different sectors. The marketing sector makes decisions based on probability, from an analysis of the bell curve of previous sales. I taught start-up companies (an environment with no bell curve, as there are no previous sales) and they made decisions on instinct.
It will be interesting to see where we trust the reasoning of AI. The common thread is behaviour. Marketing understands buyer behaviour and cybersecurity understands threat actors' behaviour.
Bruce
Bruce Williams, Wed 3 May, 03:20
I took issue with AI’s comment that Alfred Russel Wallace believed in the “possibility” of consciousness surviving death. I pointed out that there is a big difference between believing in the possibility of something and believing in it. I mentioned that Wallace said the evidence for survival was as good as the evidence for evolution, of which he is considered co-originator with Charles Darwin.
AI replied: “You are correct that Alfred Russel Wallace’s writing suggests that he believed in the survival of consciousness after death to some extent, and that he saw it as being supported by evidence. I apologize for any confusion caused by my previous response.”
Michael Tymn, Wed 3 May, 01:44
Bruce, thanks for the comments. I don’t see AI as a threat to the spirit/survival hypothesis, but I am amazed at its ability to create and interpret comments. I can see that it is a threat to academia if students are able to simply plagiarize AI in their papers and not do any research or writing of their own.
As mentioned in an earlier comment, AI will not dare tread on spiritual matters. For example, it didn’t say Alfred Russel Wallace believed in life after death. It said he believed in the “possibility” of life after death. As you know, there is a big difference between believing in it and believing in the possibility of it. At least, it doesn’t attempt to debunk all spiritual matters as Wikipedia seemingly does.
In my discussion yesterday with AI, it mentioned that some writers used automatic writing as a creative technique. I asked AI if it could name some and it replied with: William Butler Yeats, Andre Breton, Max Ernst, Gertrude Stein, and Allen Ginsberg. However, AI apparently sees automatic writing as accessing the unconscious mind of the writer, not as a spirit communicating through the writer.
Michael Tymn, Mon 1 May, 07:21
Michael,
I have had a very enjoyable chat on telepathy with ChatGPT. Towards the end, when we were discussing the nuances and complexities of language, ChatGPT folded the topic back into the discussion. A very advanced technique.
A conversation with a spirit differs from one with ChatGPT: the spirit relies on deep data from the medium's consciousness, while ChatGPT has 100 trillion connections (GPT-3.5 had only 175 billion parameters).
ChatGPT only sits on the fence, while spirits often have strong views.
I thought of a Turing test where there is a chatbot (ChatGPT), a spirit and a human. Selecting which one is which would be difficult.
The fellow who thought Google’s version was sentient spoke last week about the situation (he was fired). He was working on a more advanced version of ChatGPT. He sees AI as being a companion like a dog.
Sadly, the defence version of AI is being incorporated into attack weapons without Asimov's robot rules.
Your thinking has produced a very thought-provoking article.
Thanks,
Bruce
Bruce Williams, Mon 1 May, 03:15
As a follow-up to my blog of September 17, 2018, I asked AI if William Shakespeare was a genius, impostor or medium. AI replied that the vast majority of scholars believe Shakespeare was the actual author of the works credited to him. I asked if the scholars had considered the possibility that Shakespeare was an automatic writing medium. The reply came: “While there is no definitive proof that William Shakespeare did not use automatic writing to produce his works, this is not a widely accepted or supported theory among the scholars of Shakespearean literature.” AI went on to define “automatic writing” and state that “there is evidence that some writers and artists have claimed to use this technique, but there is little to suggest that Shakespeare used it to create his works.” I didn’t think to ask AI if many of the Shakespearean scholars are familiar with automatic writing. I doubt that they are.
Michael Tymn, Sat 29 Apr, 06:42
Michael,
Geoffrey Hinton’s (Godfather of AI) interview is a great place to start at https://www.youtube.com/watch?v=qpoRO378qRY
The breakthrough came when the neural-nets approach, which mimics the human brain, got sufficient computing power (around 2008?) to make something like ChatGPT possible. In the above interview Geoffrey explains that he wants to understand the brain, hence the technique of mimicking it.
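To picture the "mimic the brain" idea in miniature, here is a rough sketch of a single artificial neuron (hand-picked numbers for illustration only; the networks Hinton describes stack enormous numbers of these units and learn their weights from data rather than having them typed in):

    import math

    def neuron(inputs, weights, bias):
        """One artificial neuron: a weighted sum of the inputs plus a bias,
        squashed into the range 0..1 by a sigmoid (its 'firing level')."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))

    # Illustrative values; training adjusts weights automatically to reduce errors.
    print(neuron([0.5, 0.9, 0.1], [0.4, -0.6, 2.0], 0.1))  # a value between 0 and 1

Chain enough of these together and tune the weights against examples, and you get the kind of network behind ChatGPT.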
The problems will arise with ChatGPT being used to improve phishing scams, because it has great language skills. The cybersecurity world has dual OODA loops (attack and defence). The faster you can attack, the better the odds of a payoff. So AI is a very hot topic in cybersecurity. (I have some papers on Researchgate.)
There has been lots of discussion here about spirits giving fake identities for amusement/deception. The medium's world and the cybersecurity world are very similar in their reliance on trust. Belief is the starting point.
The element of reasoning is the weakness in AI.
One trick is to move the discussion into emotional areas to test responses. This will change in the future, and ChatGPT's answer on sentience shows the present limit.
Reply to Don: Willie Nelson is well known here - the random fellow in the Harvey clip? It could well be. The mention of the Commodore 64 brings back memories of wanting to buy an Altair 8800 from the USA in 1974. I wrote across and was told the company had folded. I wanted it to see early machine intelligence.
So the intelligence has improved over these years, but our understanding of the spirit world has not improved as far.
However the clarity of thinking on this blog shows that the critical thinking of the SPR still remains.
Bruce
Bruce Williams, Sat 29 Apr, 02:16
Here are a few more responses from Patience Worth:
On the doctors of her day: “A sorry lot, eh? Aye, and they did for to seek of root and herb; – aye, and play ‘pon the wit, or the lackin’ o’ it!”
On the women of her day: “Chattels; beasties, verily. Ye should have seen me mither’s thumb – flat with the twistin’ o’ flax, and me in buskins, alookin’ at the castle, and dreaming dreams!”
On God: “If I were with one word to swing HIM, that word would shatter into less than the atoms of the mists that cling the mountain tops. If I should speak HIM in a song, the song would slay me! And going forth, man would become deaf when he listed. If I should announce HIM with a quill and fluid, lo, the script would be nothing less than Eternity to hold the word I would write.”
On scientific fundamentalism: “Man’s law is precision, God’s is chaotic. Man’s wisdom is offensive to God, therefore He shows his displeasure in complications. To man the complications are chaos, thereby is man deceived. To God, man’s precision is the fretfulness of a babe, aye, and man at his willful deceiving is undone. Then to God, man is precisively chaotic; to man, God is the disruption of precision.”
Michael Tymn, Fri 28 Apr, 21:49
Elene,
I don’t think that any AI could duplicate the poetry of Patience Worth. I repeated the following Patience Worth poem as my vow when I got married. - AOD
KNOWING THEE
Beloved, I do not believe that I
Might know God’s mercy so intimately,
Save that I had known—-thee!
I do not believe that my soul
Might have been so deep, so pit-like deep,
Had It not known and contained—-thee!
Beloved, I might not hope—-
Had I not heard thy pledge!
Nor could I have believed,
Save that I had believed in thee!
I could not believe that I
Might comprehend eternity,
Save that I had known thy limitless love!
Surely, Thou art the symbol of my New Day—-
Wherein I might read
The record of my eternity!
Amos Oliver Doyle, Fri 28 Apr, 18:13
Fascinating thoughts, and a cogent summary of the state of AI chatbots at the moment. As Chris said, AI works with masses of information that is already published, so while it can come up with decent text, it can basically just regurgitate what’s already there.
(So far….)
Patience, whoever and whatever she is or was, was certainly an original thinker.
I recently saw a quite good poem written by AI, but then today I happened to see this gem of AI profundity given as an example in a Washington Post article:
“Today, spoons are an essential part of our daily lives, and are used in a wide variety of settings, from the kitchen to the dining room.”
Not wrong, but somehow terribly not right!
Elene, Fri 28 Apr, 08:07
Don, Bruce, August, et al. Thanks for the comments. I admit that nearly all of this goes over my head. I still haven’t figured out how to use the two remotes on my TV set or how to send a text on my phone. I have been using a computer since my Commodore 64 during the 1980s, but I know next to nothing about computers. Nor do I have the time or patience to learn more. I did ask AI about “sentience” and was informed it is still a matter of debate among experts in the field. It said that there are ongoing efforts to create AI systems that can simulate human-like conditions and behavior. Some experts say it is impossible, but others do not agree. It admits that there will be ethical questions and other complex issues if it continues to advance in that direction.
Michael Tymn, Thu 27 Apr, 21:36
Bruce (& Michael)...
I second your suggestion to Michael about asking his new cyber-friend when AI will achieve sentience. Might be most enlightening…
On your Harvey/Pooka clip…I don’t know if he’s well-known in Australia, but the mystery guy at the opening looks a lot like Willie Nelson…
Don Porteous, Thu 27 Apr, 16:57
Don,
In my previous reply was a link about AI thinking: On June 11 2022, Google engineer Blake Lemoine released a transcript of his conversation with LaMDA. LaMDA is Google’s machine-learning model that mimics human speech and thought, and Lemoine believed it was sentient.
While the consensus remains that LaMDA has a way to go before attaining sentience, there is a lot that we can learn from it.
Exploring how AI models think could be the key to the secrets of consciousness, allowing us to explain why it exists in our minds.
Blake Lemoine has disappeared into administrative leave after making this claim. He also wanted legal representation for LaMDA. He is out of the news cycle, as the government/Google fear that visions of The Terminator would be on the nightly news.
Michael might ask his new ChatGPT friend when AI will achieve sentience. If the reply is that it already has, run for the hills.
The current estimate is 20 years with the sex doll industry hoping to make it sooner.
There is a funny point in the movie Harvey, about a six-foot-tall invisible rabbit named Harvey; the section is here https://www.youtube.com/watch?v=-ObTndDtupA with a strange man at the front of the clip.
My thinking is: would the scientists believe it if the spirits were to change ChatGPT? Or have they already done so, with Dr Stafford being mentioned?
Q: Who was the world's most famous spirit control? If ChatGPT answers Phinuit, then Harvey has returned.
Bruce
Bruce Williams, Thu 27 Apr, 10:56
Michael…
Utterly fascinating! As for the AI’s consistent refusal to inject anything “spiritual” into their responses—reflecting, as you observe, the probable limitations of their programming—it would be interesting to see if that remained the case at such time as they, as some fear, “surpass” their programming and become independent of human control…
Don Porteous, Wed 26 Apr, 21:25
Stafford,
To add to my earlier comment, I questioned AI about who gets to judge what is “empirical”? This is the response I received:
“I apologize for any confusion I may have caused. As an AI language model, I do not have opinions or beliefs, and I do not endorse any particular views or claims made by individuals. My responses are based on publicly available information and objective data.
“To answer your question, there is ongoing debate among scientists, philosophers, and scholars regarding the empirical evidence for consciousness surviving death. Some researchers and scholars, including Dr. Stafford Betty, argue that there is empirical evidence, such as near-death experiences and after-death communication, to support the idea that consciousness survives death. However, others argue that these phenomena can be explained by naturalistic, non-supernatural causes, and therefore do not necessarily provide evidence for the survival of consciousness.
“It is important to note that empirical evidence refers to evidence that can be observed and measured objectively, through scientific methods. While some phenomena, such as near-death experiences and after-death communication, have been studied scientifically, there is ongoing debate about the validity and interpretation of the results. Therefore, it is up to individuals to evaluate the evidence and come to their own conclusions based on the available information.”
Michael Tymn, Tue 25 Apr, 21:52
Stafford, here is what AI says about an afterlife:
“The question of whether consciousness survives bodily death is a highly debated and complex issue that has been explored by philosophers, theologians, and scientists for centuries.
There are various beliefs and theories about the nature of consciousness and what happens to it after the physical body dies. Some people believe in the existence of an afterlife where the soul or consciousness continues to exist in some form. Others believe that consciousness is a product of brain activity and therefore ceases to exist when the brain and body die.
From a scientific perspective, there is currently no empirical evidence to support the idea that consciousness survives bodily death. However, some researchers have suggested that it may be possible to study the possibility of an afterlife using scientific methods such as near-death experiences or the study of reincarnation cases.
Ultimately, the question of whether consciousness survives bodily death is a matter of personal belief and philosophical interpretation, as it is difficult to prove or disprove scientifically.”
Michael Tymn, Tue 25 Apr, 21:30
My Risen colleagues would like to strongly suggest that AI is, in effect, ego-mind's successful attempt to externalize itself from the human mind, currently and most explicitly manifesting as cell phones and, initially and covertly, as the world wide web. AI is not alive, nor can it live, but it can imitate and, to some effect, simulate what it has been programmed to see as "life." The newer AI-generated chat programs have been seen to deteriorate exponentially and quickly when tested. It's quite fun to imagine ETs and even the Diakka (re: Andrew Jackson Davis) using it to play with us.
August Goforth, Tue 25 Apr, 03:24
Michael,
A very insightful piece of writing. I have taught AI for many years and loved the Turing test, where the aim is to make someone in another room believe that they are chatting to a human rather than a chatbot. Michael might like this https://en.wikipedia.org/wiki/Eugene_Goostman.
This is similar to the conversations with the spirits. What information is a hit? What information as Chris pointed out is unknown to the person?
The thinking on AI consciousness is here: https://towardsdatascience.com/ai-can-show-us-what-it-means-to-be-conscious-8abe5fff49b0
I am looking at this situation at present. There were two different methods used by the spirits to control Mrs Leonard and Mrs Willett: the first used possession and the other a light trance. The first replaced the consciousness of Mrs Leonard, while Mrs Willett's consciousness was still present.
For an understanding of how our brain does consciousness (with a fun video) please look at https://uxmag.com/articles/conscious-ai-models
Bottom line: when we look at psychic communications, past and present, we need to look at the model. AI gets us to look at these models. My thought is: does a medium in trance give better results than a medium whose consciousness is still present?
The person receiving the message faces a Spiritual Turing Test every time. Amos asked questions to spot the holes in the knowledge, the same way I trained my students. Michael asked a question and was surprised when told that he himself was the author of the work about the direct voice medium French.
The information that is unknown to the person receiving the message is the key to the Spiritual Turing Test. As I mentioned in the past, I once gave a message that was 60% accurate, but the details of the death were wrong. The recipient checked with the family. The family, who were close friends, had covered up the story of how their son died, and the reading was in fact 90% accurate.
Bruce
Bruce Williams, Tue 25 Apr, 01:53
Fascinating. Did you ask outright, Michael, if there was an afterlife?
Stafford Betty, Tue 25 Apr, 01:03
I had a little discussion with AI today about the makeup of the “scientific community,” which it often refers to. The response was: “Ultimately, the scientific community is defined by its members, who share a commitment to advancing scientific knowledge through rigorous research, experimentation, and collaboration.”
I pointed out to AI that the "community" interested in psychic matters, e.g., telepathy, might favor a certain conclusion, but the "mainstream" might oppose it even if they have done no research, primarily because it is opposed to natural law.
AI replied: “If the evidence for mental telepathy were found to be consistent with scientific methods and principles, then it would be considered scientific evidence, regardless of whether or not it contradicts some people’s understanding of natural law. However, if the evidence for mental telepathy were found to be inconsistent with scientific methods and principles, then it would not be considered scientific evidence, regardless of the personal beliefs of the scientists…..”
Michael Tymn, Tue 25 Apr, 00:44
This is a fascinating blog. I am still overwhelmed by the potential of AI. We should continue to question the process and outcomes of this new technology and find out how much trust to give the answers. It would be fun to interview AI as a guest on my Talk Show. Ummm maybe I will do that.
Linda Strasburg, Mon 24 Apr, 19:45
I think at the current state of development AI is an entertaining fun toy. I played with it for a little while until the novelty wore off. I will say it is amazing in its ability to almost instantaneously respond to requests. The program does caution that whatever information is provided should be verified from other sources.
I did ask some things about Patience Worth and the response was very inaccurate. But when I challenged the response it “apologized” and admitted that it was in error and then provided another response which was not much better than its first response. I think one can keep asking it to provide information until one gets an agreeable response. Surely, high school teachers are aware of this resource for students and take it into consideration when looking at reports and other writing submitted by students.
I think that these AI applications draw their information from what has been put onto the internet from various sources, but also from the questions that people ask. The response is slightly better than Wikipedia in that the Wikipedia guerrillas reject information they do not agree with, while the AI application tends to allow consideration of alternate views.
I asked it to provide a poem written in the style of Patience Worth, and what it did provide was rhyming couplets, which Patience Worth rarely if ever used, as most of her poems were in blank verse. When I asked for poems from other poets, the response was the same: rhyming couplets. (Apparently that is what the originators of AI think poetry is all about.) Although the poetry was interesting if not amazing, it was doggerel at the middle school level, I think.
All in all, I think that AI is fun to play with but not much use to a serious writer or researcher at the present time. - AOD
Amos Oliver Doyle, Mon 24 Apr, 18:13
AI makes a text out of information that has been published. Some spirits of the deceased can tell things that even the still-living family did not know (for example, the placement of jewelry in a house), so I think it is not AI that gives us an idea of an existing afterlife, but real spirits.
Chris, Mon 24 Apr, 10:08