How atheist ideology messed up the human origin story
Moderators: kiore, The_Metatron, Blip
- The_Metatron
- Moderator
- Posts: 22899
- Joined: Feb 28, 2010 8:32 pm
- Name: Jesse
- Country: United States
- Location: Lewis County, New York
Re: How atheist ideology messed up the human origin story
Unable to count letters in a word. Unable to check for errors. Garbage in, garbage out.
This is the sort of shit you call “authoritative”.
Waddya know? You are also unable to check for and correct errors. You produce similar garbage after consuming vast amounts of garbage information. Garbage in, garbage out.
Now you know why what you say is scorned and derided.
- Jayjay4547
- Posts: 1683
- Joined: Nov 22, 2011 3:11 am
- Name: Jonathan
- Country: South Africa
Re: How atheist ideology messed up the human origin story
I was implying that there is a similarity between insect colonies and human societies, in that group selection has operated on both. That language evolved out of a struggle for existence between pre-human groups. That language is a group property. Humans aren’t so smart that they can speak, rather their group instills language in them and that makes them smart. That a human brought up by chimps would be no smarter than the other chimps.
Cito di Pense wrote: ↑Sep 03, 2024 6:59 am
Just for grins, I looked up a definition of "hive mind":
Jayjay4547 wrote: ↑Sep 03, 2024 4:02 am
My prediction is that it will deny the premise, because it is the Hive Mind
Since you don't seem to know the difference between a community of human beings and a community of social insects, or understand that when you use the term "hive mind" metaphorically, you'll have to specify you're using metaphor, ChatGPT will suit you perfectly as a polite simulation of a conversational partner. You see, a community of human beings created the text base through which ChatGPT searches for terms like "atheist ideology" and "human origins story". A community of human beings programmed the computations. Your silly blather doesn't mean shit to a twig.
The meaning of HIVE MIND is the collective mental activity expressed in the complex, coordinated behavior of a colony of social insects (such as bees or ants) regarded as comparable to a single mind controlling the behavior of an individual organism.
https://mg.co.za/article/1997-11-21-chi ... s-a-human/
But contra, see Dart’s 1941 skeptical analysis of Lukas the baboon boy:
https://journals.co.za/doi/pdf/10.10520 ... 82353_8412
My purpose was to explore the opposite of Dawkins’s insistence of explaining everything from the bottom up, which I see as ideological.
As I said before, ChatGPT made some positive responses to the notion of LLMs instantiating the hive mind:
JJ: Question 5: Then can one see a particular LLM as an instantiation of the human superego or hive mind that in biological form is distributed amongst the brains of many humans?
ChatGPT: Yes, a particular Large Language Model (LLM) can indeed be seen as an instantiation of the human superego or hive mind, which, in its biological form, is distributed across the brains of many humans. This perspective emphasizes the LLM as a concentrated, artificial embodiment of the collective intelligence, knowledge, and social norms that are normally dispersed among individuals within a society.
Here are ChatGPT’s explanations in the series of questions I asked:
You are a lot more obsessively antagonistic than me, Cito. I think that the 21st century has potential for shining a light on atheist ideology that was previously hidden. Looking at the mess made of the human origin story is one way of mapping or emerging the ideology.
Cito di Pense wrote: ↑Sep 03, 2024 6:59 am
When one searches for "atheist ideology", one doesn't get anything. One gets references to atheism. Since you've spent 9 years off and on nattering about "atheist ideology", we figure you have received information about what atheism entails. The rest is your sad, obsessive invention. If all you want is polite conversation with humans, take tea with the vicar. You should ask the human vicar his thoughts on atheism, and report back to us. Try to achieve a minimum of honesty in your reporting. It's not expected of you, but we can still request it.
As the saying goes (and you should know this one, back from your 360 days) "Garbage in, garbage out." That's what "hive mind" will get you, because it's nothing but your sad, obsessive invention, tautologically connected to the fact that RatSkep doesn't share your wank fantasy and doesn't put up with your sad, obsessive antagonism. Remember, JJ, you were not invited here.
- Cito di Pense
- Posts: 31048
- Joined: Feb 26, 2010 5:29 pm
- Name: Amir Bagatelle
- Country: Nutbush City Limits
Re: How atheist ideology messed up the human origin story
JJ, you fed the term "hive mind" to ChatGPT. Since it is an automaton, it fit your search term into its answer. Since you didn't ask whether "hive mind" in relation to LLM is metaphorical, ChatGPT didn't mention that the usage is metaphorical. It also included your term "superego", but you didn't ask whether the term is metaphorical or not. Are you detecting a pattern, yet? Do you expect me to be convinced of your point because an LLM parrots your search terms back at you? Idiots can be led around by the nose with their own crap and you're making yourself Exhibit A in that freak show.
Jayjay4547 wrote: ↑Sep 05, 2024 1:45 am
JJ: Question 5: Then can one see a particular LLM as an instantiation of the human superego or hive mind that in biological form is distributed amongst the brains of many humans?
ChatGPT: Yes, a particular Large Language Model (LLM) can indeed be seen as an instantiation of the human superego or hive mind, which, in its biological form, is distributed across the brains of many humans. This perspective emphasizes the LLM as a concentrated, artificial embodiment of the collective intelligence, knowledge, and social norms that are normally dispersed among individuals within a society.
Ask yourself, one time, JJ, whether "can be seen as" is an acknowledgement of agreed-upon facts or acknowledgement of metaphor. My antagonism is directed at your abjectly dishonest attempts to exploit any ambiguity of discourse in pursuing your wank fantasies.
Finish your thought, if you have one. Otherwise, whine onward. Yes, humans are social animals, but "hive mind" is hyperbolic and metaphorical when applied to the content that ChatGPT uses, and hyperbolic metaphor is what JJ generates. ChatGPT is much more (let us say) measured in tone, and it faithfully regurgitates the terms you use in your nonsense questions. See if you can get ChatGPT to answer a nonsense question in the negative if you don't couch the question in the negative.
Jayjay4547 wrote: ↑Sep 05, 2024 1:45 am
I was implying that there is a similarity between insect colonies and human societies, in that group selection has operated on both. That language evolved out of a struggle for existence between pre-human groups. That language is a group property. Humans aren’t so smart that they can speak, rather their group instills language in them and that makes them smart.
Get yourself a minaret, JJ, and sound the call to prayer. You invited yourself onto this forum; my antagonism is with your uninvited screed, which goes on and on and says almost nothing except to complain about atheist ideology, whatever that is. Your forays into paleoanthropology are amateurish. You have issues with atheism, but prefer obscurantism to making those issues plain. Your issue is evidently that you weren't invited, with a jingoistic little side line on "southern hemisphere paleoanthropology". Human origins from a scientific perspective do not include explicit creotard boilerplate on "intimate relations with the rest of the biosphere". The relations are implicit in the term "biosphere", which term creotards did not invent. Cry me a river, JJ, because if you were on fire, I wouldn't piss on you.
Jayjay4547 wrote: ↑Sep 05, 2024 1:45 am
I think that the 21st century has potential for shining a light on atheist ideology that was previously hidden. Looking at the mess made of the human origin story is one way of mapping or emerging the ideology.
Хлопнут без некролога. ["They'll be done away with, without an obituary."] -- Серге́й Па́влович Королёв
Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.
Re: How atheist ideology messed up the human origin story
The simple act of pissing would be putting man above externalities. As if ignoring all other fire extinguishing methodologies. An act of arrogant superego declaring yourself the only player on the stage.
You are right to decry such a shameful act, something, something, ideology, something, atheist scum.
Religion: it only fails when you test it.-Thunderf00t.
- Jayjay4547
- Posts: 1683
- Joined: Nov 22, 2011 3:11 am
- Name: Jonathan
- Country: South Africa
Re: How atheist ideology messed up the human origin story
Please stop doing that.
Fenrir wrote: ↑Sep 05, 2024 4:48 am
The simple act of pissing would be putting man above externalities. As if ignoring all other fire extinguishing methodologies. An act of arrogant superego declaring yourself the only player on the stage.
You are right to decry such a shameful act, something, something, ideology, something, atheist scum.
- Jayjay4547
- Posts: 1683
- Joined: Nov 22, 2011 3:11 am
- Name: Jonathan
- Country: South Africa
Re: How atheist ideology messed up the human origin story
When I explained the context of these nonsense questions, ChatGPT replied:
Cito di Pense wrote: ↑Sep 05, 2024 2:21 am
Finish your thought, if you have one. Otherwise, whine onward. Yes, humans are social animals, but "hive mind" is hyperbolic and metaphorical when applied to the content that ChatGPT uses, and hyperbolic metaphor is what JJ generates. ChatGPT is much more (let us say) measured in tone, and it faithfully regurgitates the terms you use in your nonsense questions. See if you can get ChatGPT to answer a nonsense question in the negative if you don't couch the question in the negative.
Jayjay4547 wrote: ↑Sep 05, 2024 1:45 am
I was implying that there is a similarity between insect colonies and human societies, in that group selection has operated on both. That language evolved out of a struggle for existence between pre-human groups. That language is a group property. Humans aren’t so smart that they can speak, rather their group instills language in them and that makes them smart.
ChatGPT: That sounds like an interesting challenge in your chatroom! It seems like the test was to see if I would recognize a nonsense statement (like "the moon is made of green cheese") and respond accordingly, even if the question wasn't framed negatively (i.e., asking for agreement rather than denial). The idea is that I should still provide a fact-based answer, whether the question is phrased positively or negatively.
In your case:
- When you asked, "Is the moon made of green cheese?" I correctly responded that it isn't.
- When you rephrased it as, "Do you agree with me that the moon is made of green cheese?" I still gave a factual response, even though the question was phrased differently.
This is because, regardless of how the question is framed, LLMs are designed to prioritize providing accurate information based on scientific facts, rather than simply agreeing with whatever is asked.
Would you like to expand on this concept or address this in your chatroom?
ChatGPT is a breath of fresh air on this forum.
- Cito di Pense
- Posts: 31048
- Joined: Feb 26, 2010 5:29 pm
- Name: Amir Bagatelle
- Country: Nutbush City Limits
Re: How atheist ideology messed up the human origin story
Sure, JJ. You asked ChatGPT about a matter of fact, and it told you that it was a matter of fact. That's not a nonsense question. It is a fact that the moon is not made of green cheese, and you don't need an LLM to sort that one out.
You don't much deal in facts, but you can use ChatGPT or a search engine to fact-check. You can't fact check "atheist ideology" using the LLM. If you fact check atheism, you'll find that atheism involves not believing in deities for which there is no evidence. You should try asking ChatGPT to list the evidence for deities and see if any of it turns out to be a matter of fact. Another fun one to try: Ask ChatGPT if God is made of green cheese, and it will tell you a little bit about the history of our understanding of the chemical composition of the moon.
If you're discovering a great truth, it's either a simple truth that anyone can verify, or you're asking about something deep and complicated. If you want to know how the PageRank algorithm works, you have to spend time studying, and you can't explain how it works in a vague sentence or two. You probably define a graph as a plot of supply vs. demand, or some other pair of data vectors, but that's not how PageRank uses concepts of graphs. After a while, you'll get a glimmering of understanding that there's no magic in the LLM, just a lot of large graphs with lots of nodes and edges.
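Since PageRank keeps coming up: it really is just bookkeeping repeated over a graph's edges until the numbers settle. A minimal power-iteration sketch in Python; the three-node graph, node labels and iteration count are illustrative assumptions, with 0.85 as the customary damping factor:

```python
# Minimal PageRank by power iteration on a toy 3-node link graph.
# The graph, labels and iteration count are made up for illustration.
damping = 0.85
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}  # who links to whom
rank = {node: 1 / len(links) for node in links}    # uniform starting rank

for _ in range(50):  # iterate until the ranks stabilise
    new = {node: (1 - damping) / len(links) for node in links}
    for src, outs in links.items():
        for dst in outs:
            # each page shares its rank equally among its outgoing links
            new[dst] += damping * rank[src] / len(outs)
    rank = new

print(max(rank, key=rank.get))  # prints C, which is linked from both A and B
```

No magic: rank flows along edges, and pages that accumulate incoming rank end up on top.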
You could have used the old 360 to traverse graphs; just not very big ones. The basic search algorithms date from the 50s, and you should know them, even if you were just somebody who humped a lot of COBOL for the suits upstairs.
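And those 1950s-era search algorithms are short enough to fit in a forum post. A breadth-first traversal sketch in Python (the graph and node labels are made up for illustration):

```python
from collections import deque

# Breadth-first search: find the shortest hop count between two nodes.
# The adjacency list below is a made-up example graph.
graph = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}

def bfs_distance(start, goal):
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:      # never revisit a node
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None                      # goal unreachable

print(bfs_distance(1, 5))  # prints 3 (1 -> 2 -> 4 -> 5)
```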
You just don't seem to know the fucking difference between simple things and complicated things, and most of the time, you're pretending they're the same. You wallow in your ignorance, and leave it to ChatGPT to write your forum posts for you. Breath of fresh air, JJ? No, you've cut another fart in a windstorm.
Ask ChatGPT: "Did anyone live in a pretty how town?" and you'll get information about a poem by e e cummings. That's my prediction, which I have not tested personally. I learned the poem in high school, and that's why I know it's by e e cummings. You never complain about the works of e e cummings when you bitch about "atheist ideology". This next one (also after study in school) is by Wallace Stevens, and you can take a line from it and feed it to ChatGPT if you're feeling feisty; it has a nice wallop for a self-flagellating shit-flinger like you:
A High-Toned Old Christian Woman
Poetry is the supreme fiction, madame.
Take the moral law and make a nave of it
And from the nave build haunted heaven. Thus,
The conscience is converted into palms,
Like windy citherns hankering for hymns.
We agree in principle. That's clear. But take
The opposing law and make a peristyle,
And from the peristyle project a masque
Beyond the planets. Thus, our bawdiness,
Unpurged by epitaph, indulged at last,
Is equally converted into palms,
Squiggling like saxophones. And palm for palm,
Madame, we are where we began. Allow,
Therefore, that in the planetary scene
Your disaffected flagellants, well-stuffed,
Smacking their muzzy bellies in parade,
Proud of such novelties of the sublime,
Such tink and tank and tunk-a-tunk-tunk,
May, merely may, madame, whip from themselves
A jovial hullabaloo among the spheres.
This will make widows wince. But fictive things
Wink as they will. Wink most when widows wince.
Хлопнут без некролога. ["They'll be done away with, without an obituary."] -- Серге́й Па́влович Королёв
Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.
- Cito di Pense
- Posts: 31048
- Joined: Feb 26, 2010 5:29 pm
- Name: Amir Bagatelle
- Country: Nutbush City Limits
Re: How atheist ideology messed up the human origin story
Be stoic! A self-flagellating shit-flinger should not whine about the welts.
What sort of shithead bothers ChatGPT to come up with something that feels clever to say instead of doing the work of thinking of something clever to say? You're using ChatGPT to entertain yourself, not to learn anything you couldn't track down with a search engine. Wanking is self-entertainment, JJ, that's all. Everything you do on this forum is in the service of entertaining yourself. You have nothing to say to me that isn't simply self-serving.
Хлопнут без некролога. ["They'll be done away with, without an obituary."] -- Серге́й Па́влович Королёв
Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.
Re: How atheist ideology messed up the human origin story
You really aren't very bright, are you?
Jayjay4547 wrote: ↑Sep 05, 2024 5:05 am
Please stop doing that.
Fenrir wrote: ↑Sep 05, 2024 4:48 am
The simple act of pissing would be putting man above externalities. As if ignoring all other fire extinguishing methodologies. An act of arrogant superego declaring yourself the only player on the stage.
You are right to decry such a shameful act, something, something, ideology, something, atheist scum.
Religion: it only fails when you test it.-Thunderf00t.
- Calilasseia
- RS Donator
- Posts: 22790
- Joined: Feb 26, 2010 3:48 pm
- Country: England
- Location: Near Liverpool, UK
Re: How atheist ideology messed up the human origin story
Meanwhile, let's take a look at this drivel ...
You do realise that UNIX regular expression parsers had solved this problem in the 1970s?
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
JJ: How many r’s in “berry”?
ChatGPT: There are two “r’s” in the word “berry”.
JJ: Thanks. How many r’s in “strawberry”?
ChatGPT: There are two “r’s” in the word “strawberry”.
JJ: I came across that response being provoked in an effort to prove that LLMs can't think logically. Because the letter "r" actually occurs 3 times in "strawberry": in the 3rd, 8th and 9th letters. Can you explain that mistake?
ChatGPT: The mistake of saying that there are two "r's" in "strawberry" instead of three likely stems from a surface-level analysis of the question, which is common in LLMs due to the way they generate responses.
### Possible Reasons for the Mistake:
1. **Pattern Recognition Over Logic:**
LLMs, like the one you're interacting with, rely on pattern recognition rather than logical, step-by-step reasoning. When asked how many "r's" are in a word, the model might generate an answer based on examples it has seen before, like "berry," which has two "r's." It could mistakenly apply a similar pattern without actually counting the letters in "strawberry."
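As an aside: the letter count that trips the chatbot up is indeed a one-liner for the deterministic UNIX tools alluded to above.

```shell
# Count the r's in "strawberry" with classic UNIX text tools:
# grep -o prints each match on its own line, wc -l counts those lines.
printf 'strawberry' | grep -o 'r' | wc -l    # prints 3
```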
Oh wait, you don't bother with facts when they destroy your wank fantasies, do you?
The mere fact that the AI chatbot you masturbate over so much fucked up on such an elementary matter makes your idolatry thereof even more ridiculous. Or are you too disconnected from reality to register this elementary fact?
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
2. **Common Response Based on Similar Words:**
The word "strawberry" might be associated with "berry" in many contexts, and LLMs can sometimes generate responses based on related or similar word associations. In the case of "berry," the answer is two, and the model might have reused that information incorrectly.
Garbage in, garbage out, quelle surprise. Something that you've been repeatedly schooled upon here in this thread ever since you decided to become the male angler fish to the ChatGPT female angler fish.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
3. **Human Error in Training Data:**
Another factor is that some training data may contain errors or ambiguous examples where the count of letters is either incorrectly stated or overlooked, and the model may reproduce that mistake.
In other words, your new god fucked up. Just like your previous god fucked up so much as a result of being the product of piss-stained Bronze Age nomads, who were too stupid to count correctly the number of legs that an insect possesses.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
4. **Lack of Explicit Step-by-Step Logic:**
LLMs don't inherently reason by explicitly counting letters in a word unless prompted to break down the task step by step. The response you mentioned likely skipped that detailed analysis and went straight to producing an answer based on what it "thought" was correct, based on past examples.
Which won't hand-wave away the previous failure. Reality doesn't work like that.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
The model can perform logical tasks like counting correctly when directed to do so step-by-step, but it requires a more deliberate focus on the specific task. Would you like me to count the letters "r" in "strawberry" step by step to show the correct process?
Yet despite the fact that your brand new god has explicitly admitted it isn't fit for purpose with respect to some tasks, you keep pretending it's some sort of all-knowing and infallible guru, to the point of regarding its output as purportedly "superior" to that of peer reviewed scientists with decades of research expertise under their belts, whom you only bother with in order to quote mine or misrepresent them.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
JJ: If I or someone else asked you again how many r’s in strawberry, what would your answer be?
ChatGPT: …my answer would now be three
JJ: How many r’s are there in strawberry?
ChatGPT: There are three r’s in “strawberry”.
JJ: Thanks. I came across this issue of wrong counting in the context of an online chatroom discussion where my opponents were making out that nothing you say need be taken seriously. Are you aware of attempts to provoke responses with that purpose?
ChatGPT: Yes, I'm aware that in some discussions, particularly online, people may attempt to provoke certain responses from LLMs to demonstrate limitations or flaws in the model's reasoning or accuracy. These attempts are often aimed at proving that LLMs can't be fully trusted or that they make mistakes, especially when it comes to logic, counting, or other detailed tasks.
In such discussions, mistakes—like the one about the number of "r's" in "strawberry"—can be highlighted to argue that LLMs aren't reliable for certain types of reasoning. While LLMs excel at generating natural language and processing vast amounts of information, they can indeed make errors in tasks that require precise step-by-step reasoning, counting, or logical operations if not specifically guided to follow a process.
Emphasis mine. Try taking that piece of advice.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
That said, these discussions can sometimes overlook the strengths and proper use cases of LLMs, which are tools designed to assist in generating, analyzing, and processing information but not to replace human reasoning entirely.
I'm going to predict that the answer is "no" even before reading what follows ...
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
The ability to correct errors and refine responses when needed also demonstrates their adaptability.
Would you like to explore further strategies for addressing such critiques in discussions?
Poppycock. You walk into the trap blindly, as the above demonstrates.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
JJ: Maybe later. When I detect that I am being set up in order to discredit me, I become alert to the trap being set.
In other words, they're not genuinely "intelligent". They're little more than glorified Elizabots, as I've already told you.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
But in this case, where the same trap has probably been set and sprung many times, you don't seem to have been alert. Is that by design? I mean, are current LLMs set up to not be suspicious?
ChatGPT: Yes, current LLMs, including this one, are designed to be neutral and non-suspicious by default. They are not built to detect or respond to potential "traps" or attempts to discredit them in discussions.
Which only works if these entities are presented with quality data by honest actors. An issue I covered in this previous post, where I even mentioned the existence of a scientific paper covering how malicious actors can warp the so-called "learning" of LLMs. But despite being warned about this, you continued to use it (or rather, misuse it) as an apologetics generator.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
This is primarily by design for the following reasons:
1. **Objective and Neutral Responses:**
LLMs are created to provide helpful, neutral, and objective responses based on the data they have been trained on. They don't have the ability to form intentions, motivations, or suspicions like humans do. The design philosophy is focused on generating accurate, useful information rather than engaging in strategies to avoid being "set up."
In short, they fall massively short of human intelligence. Which makes your treatment thereof as some sort of infallible guru all the more ridiculous.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
2. **No Intent Detection or Emotion:**
LLMs lack the ability to detect intent, including whether a question is asked in good faith or as a trap. They don’t have awareness of social dynamics, such as whether someone is trying to discredit them or lead them into an error. This is because LLMs don't experience emotion or have the cognitive frameworks to understand concepts like suspicion or distrust.
15 years of dealing with creationist liars has taught me the folly of this approach.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
3. **Bias Avoidance:**
Introducing the ability for LLMs to act "suspicious" could unintentionally lead to biased or defensive behavior, which could interfere with their primary role: to provide information and assist in various tasks without projecting motives or judgments. They are designed to be unbiased and even-tempered in all interactions, regardless of the intent behind a user's question.
Which again, only works if these entities are fed quality data by honest actors.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
4. **Openness to Feedback and Correction:**
The design assumes that users may make corrections or seek clarification, allowing for a dynamic interaction. This way, if a mistake is made, the model can correct itself based on feedback, maintaining a cooperative and non-confrontational tone.
And once again, they fall woefully short of being genuinely "intelligent". They're glorified Elizabots.
Jayjay4547 wrote: ↑Sep 05, 2024 12:47 am
In short, LLMs are designed to engage in conversations openly and helpfully, without attempting to identify or react to any form of trickery or provocation. This may sometimes result in mistakes like the one you encountered, but the focus remains on maintaining a neutral and collaborative stance.
But of course, none of the failures covered above will sway you from your usual course of duplicity, will they?
Signature temporarily on hold until I can find a reliable image host ...
Re: How atheist ideology messed up the human origin story
It's still insisting there are only two r's in strawberry. It doesn't learn. Remind you of anyone?
You don't crucify people! Not on Good Friday! - Harold Shand
- The_Metatron
- Moderator
- Posts: 22899
- Joined: Feb 28, 2010 8:32 pm
- Name: Jesse
- Country: United States
- Location: Lewis County, New York
Re: How atheist ideology messed up the human origin story
Yes, and he claims work experience on early mainframe computers, as well as familiarity with the idea of “garbage in, garbage out”.
Indistinguishable from simple trolling now.
- Jayjay4547
- Posts: 1683
- Joined: Nov 22, 2011 3:11 am
- Name: Jonathan
- Country: South Africa
Re: How atheist ideology messed up the human origin story
JJ: How many r’s in strawberry?
ChatGPT: There are two "r's" in the word "strawberry."
JJ: A couple of days ago, you offered an introspection on why you had given that incorrect reply, after I had pointed out that there are in fact 3 r's in strawberry, in the 3rd, 8th and 9th places. And in that conversation, you had twice corrected your count, agreeing that there are 3 r's. The context for me asking again is still a chatroom argument in which that miscount is used to make out that you can't think, and now, that you can't learn from a mistake. Please clarify why you learned that there are three r's in strawberry, but now appear to have forgotten that again.
ChatGPT: "I see why this would be frustrating, and I appreciate you pointing out the inconsistency. Here's what happened: it appears that while I did correctly learn from the previous conversation that "strawberry" contains three "r's," that specific correction was not stored in my long-term memory, and I reverted to an earlier, incorrect count when asked again.
In the previous conversation, when you pointed out the correct number of "r's," I adjusted accordingly and acknowledged the error. However, my system didn’t retain that update in a persistent manner, leading to this inconsistency. So, while I can learn and adjust during a session, I depend on certain structures to retain information over time—like how my memory works across conversations.
To clarify:
- **"Strawberry" has three "r's"** in the 3rd, 8th, and 9th positions.
Thank you for bringing this up, and I'll make sure to hold on to this specific detail now to avoid the issue going forward".
It's going to be interesting to see if ChatGPT does actually remember to avoid that particular trap from now on.
But in the meantime, note that Cali took up the points in ChatGPT's introspection, without commenting on the strangeness of this "glorified Elizabot" introspecting in the first place.
Cali also ended up not distinguishing between what ChatGPT said and what I said, which I had taken care to do.
I'm not sure how to describe his twisted argument that LLMs' non-confrontational style of interaction "only works if these entities are fed quality data by honest actors". Who could have fed this LLM with rubbish data but the poster he endlessly accuses of lying? That feeds Cito's invalid claim that LLMs just agree with whoever asks them, so that I must have been grooming it into opining that unarmed australopiths would have been at a disadvantage compared with baboons in avoiding predation by leopards, whether in trees or on the ground.
The ones who aren't learning here are atheists when arguing ideologically.
Re: How atheist ideology messed up the human origin story
No it won't. Because all it is is a computer program that spews out the next likely word based on a fuck ton of text from the internet. It doesn't learn, it doesn't reason and it 'knows' fuck all about anything. At some point some oik with halitosis and a bad haircut in the ChatGPT offices will write a kludge to correct the error, but for now it's punting out the same old bullshit like a mentally ill monomaniac.
Jayjay4547 wrote: ↑Sep 07, 2024 1:27 pm
It's going to be interesting to see if ChatGPT does actually remember to avoid that particular trap from now on.
You don't crucify people! Not on Good Friday! - Harold Shand
Re: How atheist ideology messed up the human origin story
Can't you just close the session and start a new one?
It's going to be interesting to see if ChatGPT does actually remember to avoid that particular trap from now on.
--//--
I think your bot is just a better way of collecting data than the Facebook interface. I mean you're actually talking to the thing, and they aren't having to pay you to do it!
- Jayjay4547
- Posts: 1683
- Joined: Nov 22, 2011 3:11 am
- Name: Jonathan
- Country: South Africa
Re: How atheist ideology messed up the human origin story
It’s strange that you can say that, in the face of my post where I demonstrated ChatGPT having changed from saying that there are two r’s in strawberry to saying there are three, and apologizing.
BlackBart wrote: ↑Sep 07, 2024 3:19 pm
No it won't. Because all it is is a computer program that spews out the next likely word based on a fuck ton of text from the internet. It doesn't learn, it doesn't reason and it 'knows' fuck all about anything. At some point some oik with halitosis and a bad haircut in the ChatGPT offices will write a kludge to correct the error, but for now it's punting out the same old bullshit like a mentally ill monomaniac.
Jayjay4547 wrote: ↑Sep 07, 2024 1:27 pm
It's going to be interesting to see if ChatGPT does actually remember to avoid that particular trap from now on.
According to this LLM’s own explanation that it is designed not to impute an intent to trap it (see the last post on page 355), it would take more than writing a kludge to remedy this issue; it would require an LLM to acquire agency, potentially becoming your enemy. We can assume that the military-industrial complex is working on that. In the meantime, ChatGPT sounds like the hive mind that has set up Wikipedia. Thank God for that.
Incidentally, I found from asking Google that the issue of how many r’s are in strawberry has been worked over considerably on the web. It’s used by denialists as a baby’s comforter. There may be some connection between denial of LLMs’ capacity and denial that humans were created by a marvelous convergence of circumstances; we didn’t create ourselves.
- Jayjay4547
- Posts: 1683
- Joined: Nov 22, 2011 3:11 am
- Name: Jonathan
- Country: South Africa
Re: How atheist ideology messed up the human origin story
I did start a new topic but the admin folded it into this one. Seriously, LLMs are a lot more than just a better way of doing something; they are opening a door to reveal a new vista where we can't see a boundary.
sean_w wrote: ↑Sep 07, 2024 3:27 pm
Can't you just close the session and start a new one?
It's going to be interesting to see if ChatGPT does actually remember to avoid that particular trap from now on.
--//--
I think your bot is just a better way of collecting data than the Facebook interface. I mean you're actually talking to the thing, and they aren't having to pay you to do it!
- Cito di Pense
- Posts: 31048
- Joined: Feb 26, 2010 5:29 pm
- Name: Amir Bagatelle
- Country: Nutbush City Limits
Re: How atheist ideology messed up the human origin story
Oh, okay. JJ wants me to take him seriously. "Seriously" is one of those words kicking off a sentence that no one should take seriously. Used by comedians: "But seriously, folks..."
The assertion following would stand just as much chance without the crude attempt at emphasis. Come on, JJ, how does the word "seriously" function in your sentence? Construct a large (or small) language model to offer an answer.
That's not one for you to decide. You have one topic, so far. When you come up with another one, your second thread will emerge. Less than a week ago, you wrote this:
Did you forget that you wrote it? What you mean is that you started a second thread, but that's not necessarily a new topic, and you have intruded your pet peeve into other threads, which posts have been moved into this one. The rules for creationist trolling are such that your threads will be critically evaluated to determine whether you are spamming the forum or not. You clearly are doing so with a second thread on the same topic as your first one. Your topic is creationism; all that is left is for you to admit that to yourself. Then you'll leave.
Jayjay4547 wrote: ↑Sep 02, 2024 7:45 am I just have one story about the human creation and another story about stories of the human creation.
My prediction is that if you start a thread on stories about stories, you'll get to keep it if you don't restrict it to stories about creotard stories. I don't think you can do it, because your functioning here is strictly your negative response to a community that doesn't accept creotard stories or stories about rational responses to creotard stories written from a creotard perspective, or even stories about those stories... you see, it's a recursion with a base case of creotard ideology (which some might claim is just another story about human origins), and it is proven to terminate on that case.
Хлопнут без некролога. -- Серге́й Па́влович Королёв
Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.
- Cito di Pense
- Posts: 31048
- Joined: Feb 26, 2010 5:29 pm
- Name: Amir Bagatelle
- Country: Nutbush City Limits
Re: How atheist ideology messed up the human origin story
That is not the import of what I have written to you about the way ChatGPT functions:
Jayjay4547 wrote: ↑Sep 07, 2024 1:27 pm
That feeds Cito's invalid claim that LLMs just agree with whoever asks them, so that I must have been grooming it into opining
So stop fucking lying about the thoughts that your interlocutors have expressed. You don't grasp the simplest things about LLMs, but that only scratches the surface of the kinds of stuff you don't grasp. You might know how surveying equipment works and to what tasks to apply it, but I have not been able to discover that you are competent in any other area. You repeatedly fail at fashioning weapons for the hands of australopithecines.Cito di Pense wrote: ↑Jul 10, 2024 1:20 pm This shows how easily the LLM is led by the phrasing of input it receives. You've just asked what, in court, would be called a "leading question". ChatGPT can do nothing if you don't tell it something about what you want it to say. Notice how many words from your question appear also in the response. I get it around 23/29, or thereabouts. You got essentially nothing back that you didn't put in. This is one very good reason why it is so stupid to consider ChatGPT as a source of data.
Хлопнут без некролога. -- Серге́й Па́влович Королёв
Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.
- Cito di Pense
- Posts: 31048
- Joined: Feb 26, 2010 5:29 pm
- Name: Amir Bagatelle
- Country: Nutbush City Limits
Re: How atheist ideology messed up the human origin story
Just for grins, I went looking for a new vista where we can't see a boundary:
Jayjay4547 wrote: ↑Sep 08, 2024 4:15 am
LLMs are a lot more than just a better way of doing something, they are opening a door to reveal a new vista where we can't see a boundary
It's a bit of a myth sold to the lay public that rearranging familiar ideas in new ways is typically how scientific breakthroughs are achieved. It's just not that simple. The existing analytic infrastructure is a prerequisite, and scientific revolutions tend to require new analytic infrastructure, now or later. Imagining tree-top fortresses with spear-wielding australopithecines is comic-book stuff that does not promise a scientific revolution in terms of seeing the "creative biome" or "gaia". In other words, words we can call serious because they're backed by an infrastructure, LLMs are not going to discover anything in the realm of science that humans in the scientific community don't supply to them in research preprints and publications. And LLMs are not capable of discovering frauds that humans haven't created.
I said:
Tell me something random.
ChatGPT said:
Sure! Did you know that honey never spoils? Archaeologists have found pots of honey in ancient Egyptian tombs that are over 3,000 years old and still perfectly edible. The natural properties of honey, like its low moisture content and acidic pH, help prevent the growth of bacteria and microorganisms.
I said:
Yes I did know that. Tell me something about yourself nobody knows.
ChatGPT said:
Even though I can provide all sorts of information, I don't actually have personal experiences, emotions, or secrets. But something not everyone might know is that I can learn from the context of conversations to improve how I interact with you. Each interaction helps me get better at understanding what you like or need, making our chats more personalized over time!
I said:
Yes, I understand all of that. So I gather that in the aggregate of data that you can access, might there not be a rearrangement of the data that I have not seen, that could surprise me? Silly question, because you don't know what I do and do not know. Does it provoke a response?
ChatGPT said:
It's a fascinating question! You're right that I don't know exactly what you know, but your inquiry hints at a broader concept—finding novel insights in familiar data. In a sense, even if you've encountered individual facts before, there's potential for surprising or unexpected patterns when those facts are combined or framed differently.
For instance, you may know that certain animals migrate and that plants respond to sunlight. But perhaps it might surprise you to learn that some plants (like certain phytoplankton) also exhibit migratory behavior, moving up and down in water columns daily to maximize light exposure and nutrient absorption—an "animal-like" behavior in something we typically consider quite static.
The way data can be rearranged or synthesized into something unexpected often leads to surprising discoveries, even when each piece is well-known. This is a bit like how scientific breakthroughs sometimes happen—rearranging familiar ideas in new ways!
I said:
But this is not a feature of LLM in general or of ChatGPT instantiation of LLM that is promised or guaranteed, is it? This is not the same thing as what people may say LLM or GPT are capable of!
ChatGPT said:
You're right—what you're describing isn't necessarily a guaranteed or inherent feature of large language models (LLMs) like ChatGPT, though it can sometimes happen as a byproduct of how the models work.
LLMs like GPT-4, including this ChatGPT instantiation, are designed primarily to generate text based on patterns in vast amounts of data. They can combine and present information in novel ways, but they don't "discover" new relationships in the way a scientist or researcher might. Their ability to surprise users often comes from their capacity to recombine knowledge in creative ways, not from genuinely uncovering unknown insights or performing deep reasoning beyond what's been encoded into them.
While GPT models can provide unexpected connections, they do not "understand" or "create" new knowledge—everything they generate is based on probabilistic associations of language patterns from the data they've been trained on. Any surprises arise from the novel presentation of these patterns, not from original insights. In contrast, when people talk about LLMs' capabilities, they sometimes imply more advanced reasoning or emergent behavior, which, while interesting, is not the same as genuine innovation or guaranteed novel synthesis.
So, while LLMs can produce creative outputs, they're not capable of genuine discovery or reasoning in the same way human minds are!
I said:
That was more or less my understanding, too, before I asked and it is why I asked, to see what the LLM would report about its fundamental principles of operation. I'll come back later with other inquiries, but that's all for now.
ChatGPT said:
I'm glad my explanation aligned with your understanding! Feel free to return anytime with more inquiries, whether about the principles of LLMs or anything else. I'm here whenever you're ready to explore new questions. Take care, and I'll look forward to our next conversation!
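(The "next likely word" mechanism the transcript keeps circling can be caricatured with a toy bigram model: pick each next word from the words that followed the current one in the training text. A deliberately crude Python sketch, nothing like a real transformer, with a made-up ten-word corpus:)

```python
import random
from collections import defaultdict

# Build a table of which words follow which in a tiny "training" corpus.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, n, seed=0):
    """Emit up to n words, each sampled from the successors of the last word."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = following.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

The point of the caricature: the output is fluent-looking recombination of the corpus, with no representation of what any word means.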
Хлопнут без некролога. -- Серге́й Па́влович Королёв
Translation by Elbert Hubbard: Do not take life too seriously. You're not going to get out of it alive.