  #1  
Old 12-11-2023, 05:54 PM
michio is offline
Member
 
Join Date: May 2009
Posts: CXVIII
Default Studying and Education with AI

This thread is for general discussion about how to learn anything using AI as a tool, and for any discussion about how AI is affecting the education system. Dump any thoughts about those things here.

As of the time of writing, all of my opinions below are predicated on using GPT-4. I have used GPT-4 exclusively since March, and it currently blows everything else out of the water. None of this applies to GPT-3 or other basic models.

Gemini Pro currently powers the Bard chatbot, and I recommend using that instead of GPT-3: it's better overall, it's free, and its training data is more up to date.

---

AI's effect on student performance

I don't have any hard data on this, but my speculation is that AI will let good, motivated students accelerate past their peers faster than usual, while lazy students fall even further behind. In an AI world, the gap between good students and bad students is widening.

AI can be a powerful tool for learning, but used incorrectly it's just cheating, letting someone else do your work for you. Students may also adopt the attitude of "Why do I have to learn anything an AI can tell me or do for me?" and outright refuse to learn because of it.

---

AI as Teachers

AI has been a powerful teacher and mentor for me so far. You can ask it to explain something over and over without it becoming annoyed or tired. You can ask it to explain something at different levels of complexity until you understand. You can try to explain something to the AI in your own words, then ask it if what you've said makes any sense.

Random example: last month I watched some videos from PBS Space Time on YouTube, which I highly recommend. I tried watching a video about loop quantum gravity, and the host completely lost me about four minutes in, so I asked ChatGPT to explain some of the concepts. It took a while, but after talking to ChatGPT for a good hour or two, I was finally able to somewhat wrap my head around the core ideas of LQG. If I had used Google, there's no way I would have understood any of that, and I would have given up quickly.

You can ask the AI to do things a teacher would normally do: give it a topic, include a sample of the material if you like, then ask it to generate flashcards, test questions, or essay questions, and attempt them on your own.
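If you'd rather script that than paste into the chat window, the same idea works over the API. This is only a rough sketch, assuming the official openai Python package (v1.x); the model name, prompt wording, and the number of cards are placeholders I made up, not a recipe.

Code:
# Hedged sketch: turn pasted study material into flashcards and questions.
# Assumes the official `openai` Python package (v1.x) and an OPENAI_API_KEY
# in the environment. Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

material = """<paste a section of your notes or textbook here>"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system",
         "content": "You are a patient tutor who writes study aids."},
        {"role": "user",
         "content": "From the material below, write 10 flashcards as "
                    "'Q: ... / A: ...' pairs, then 3 short essay questions.\n\n"
                    + material},
    ],
)

print(response.choices[0].message.content)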

---

Prompt Engineering

I've facetiously criticized "prompt engineering" in the past as a buzzword made up by grifters and ignorant HR people who don't understand what AI is, but I'm walking that back here. Those people certainly exist, but without getting into a big discussion about it, I recommend learning prompt engineering theory anyway, both to get better responses from LLMs and to make yourself less vulnerable to the white-collar layoffs coming in the future. If you take it even a tiny bit seriously, you're easily in the top 1% of people using AI right now, and you'll set yourself apart when those layoffs start hitting.

I recommend ignoring both (1) the people who brush off AI as a fad like crypto, and (2) the people who say you don't need "prompt engineering", you just need to communicate better.

(1) I won't get into that here, good luck to those people is all I can say.

(2) You need both. You need to be a good communicator, but you should also know prompt theory to put yourself in the top 1% of people using LLMs regularly. When I see criticisms of ChatGPT, 95% of the time it turns out to be someone who doesn't understand how LLMs work, gave it a terrible prompt, or is using GPT-3, which is trash, and often all of the above.

Try asking ChatGPT itself how it works and get it to explain prompt engineering.

There's a weirdly high number of people who say prompt engineering doesn't exist and is just people with inflated egos selling you something. The former is objectively false; the latter does exist, but grifters selling rip-off prompt packages and courses don't mean prompt engineering itself isn't real. I don't even know how to respond to this, other than to say the people making this claim are also being arrogant and annoying.

We're in the very early stages of AI, so you could argue prompt engineering is just common sense, which I flatly disagree with, but as AI evolves and people figure out more and more, the subject will only get deeper and more complex with time.

I'm halfway through this prompt engineering course on Coursera, and I highly recommend doing it.
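To give a flavour of the kind of thing I mean: one common pattern is to structure a prompt as persona + context + task + output format. A sketch only; the field names and wording here are my own illustration, not material from the course.

Code:
# Hedged sketch of a structured prompt (persona / context / task / output
# format). The structure and wording are my own, for illustration only.
def build_prompt(persona, context, task, output_format):
    """Assemble the four pieces into a single prompt string."""
    return (
        f"Act as {persona}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond in this format: {output_format}"
    )

print(build_prompt(
    persona="a physics tutor explaining things to a motivated beginner",
    context="I watched a PBS Space Time video on loop quantum gravity and got lost",
    task="explain what a spin network is, then ask me one question to check I followed",
    output_format="two short paragraphs, then a single question",
))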

---

Checking for understanding

The most common way I've been learning new things with AI is checking for understanding, using the AI as a tutor. If I'm learning anything and I'm not sure I've understood it, I'll explain what I know in my own words and ask whether I'm making sense or have misunderstood something.

I've been brushing up on algorithms, and if I'm working through a challenging problem and I just can't make any progress, I'll ask it to drop me a hint.
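A hedged template for that kind of hint request; the wording and the sample problem are just what I might type, nothing official.

Code:
# Hedged sketch of a "hints only" tutoring prompt; the wording and the sample
# problem are placeholders for illustration.
HINT_PROMPT = (
    "You are tutoring me through this problem: {problem}\n"
    "Here is my work so far: {attempt}\n"
    "Do NOT give me the solution. Give me one small hint, then stop and let me try again."
)

print(HINT_PROMPT.format(
    problem="find the length of the longest strictly increasing subsequence of an array",
    attempt="a greedy scan that tracks the current run length, which fails on [10, 1, 2, 3]",
))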

---

Useful Tools

Very good prompt engineering course on coursera

AutoExpert - Bots configured with pre-prompts and custom instructions that will make your responses from ChatGPT much better. Must be a plus user and logged in to access AutoExpert. Type /help when it loads up for instructions on how to use.

AutoExpert Chat - Good for general prompts.

AutoExpert Academic - Will help you work with and understand academic papers. Upload a paper and start asking questions.

AutoExpert Dev - For programming.

AutoExpert Video - Great for YouTube. Paste in one or more YouTube URLs and it can provide a transcript and a summary, and generate flashcards, a mind map, and homework questions.

Prompt databases so you can find useful prompts and learn from others.
PromptBase | Prompt Marketplace: Midjourney, ChatGPT, DALL·E, Stable Diffusion & more.
https://prompthero.com/
FlowGPT
Thanks, from:
256 colors (02-15-2024), Ensign Steve (12-12-2023), LarsMac (01-12-2024), specious_reasons (12-11-2023), viscousmemories (12-12-2023)
  #2  
Old 12-11-2023, 08:33 PM
mickthinks is offline
Mr. Condescending Dick Nose
 
Join Date: May 2007
Location: Augsburg
Gender: Male
Posts: VMMDCCCXXIX
Images: 19
Default Re: Studying and Education with AI

I recommend learning prompt engineering theory, ...
I recommend ignoring ... the people who say you don't need "prompt engineering", ...
There's a weirdly high number of people who will say prompt engineering doesn't exist and is just people with inflated egos selling you something. ...


I'm halfway through this [prompt engineering course] on coursera, and I highly recommend doing it.


...aaaand there you have it! "michio" is just a series of AI generated sales pitches.
__________________
... it's just an idea
Thanks, from:
256 colors (02-15-2024), LarsMac (01-12-2024)
  #3  
Old 12-11-2023, 08:56 PM
JoeP is offline
Solipsist
 
Join Date: Jul 2004
Location: Kolmannessa kerroksessa
Gender: Male
Images: 18
Default Re: Studying and Education with AI

Quote:
Originally Posted by mickthinks View Post
...aaaand there you have it! "michio" is just a series of AI generated sales pitches.
Not so. The words "It's important to note that" don't appear.
__________________

:roadrun:
Free thought! Please take one!

:unitedkingdom:   :southafrica:   :unitedkingdom::finland:   :finland:
  #4  
Old 12-11-2023, 09:10 PM
michio is offline
Member
 
Join Date: May 2009
Posts: CXVIII
Default Re: Studying and Education with AI

Quote:
Originally Posted by mickthinks View Post
I recommend learning prompt engineering theory, ...
I recommend ignoring ... the people who say you don't need "prompt engineering", ...
There's a weirdly high number of people who will say prompt engineering doesn't exist and is just people with inflated egos selling you something. ...


I'm halfway through this [prompt engineering course] on coursera, and I highly recommend doing it.


...aaaand there you have it! "michio" is just a series of AI generated sales pitches.
I forgot to add in the prompt to make the pitch sound less AI-generated.

I am wondering what exactly classrooms look like now. It would be interesting to talk to a high school student about it.

It does appear that schools initially just banned it outright, but after discussions over the summer, and seeing how fast AI was developing, they're accepting it as inevitable and working around it.

The worst teachers, in dwindling numbers, probably still forbid it outright and accuse students of cheating. Sometimes AI use is completely obvious, but it's also quite easy to prompt the AI to sound more natural. You can also give it a sample of your own writing and ask it to write in that tone and cadence. The final defense is to literally quote the AI detectors' own terms of service, where they admit their detection isn't actually reliable, i.e. these tools are bullshit and don't work.
Thanks, from:
JoeP (12-12-2023)
  #5  
Old 12-12-2023, 08:21 AM
JoeP is offline
Solipsist
 
Join Date: Jul 2004
Location: Kolmannessa kerroksessa
Gender: Male
Images: 18
Default Re: Studying and Education with AI

+1 for including a proxy link for nyt.
__________________

:roadrun:
Free thought! Please take one!

:unitedkingdom:   :southafrica:   :unitedkingdom::finland:   :finland:
Thanks, from:
Ensign Steve (12-12-2023)
  #6  
Old 01-12-2024, 10:20 PM
michio is offline
Member
 
Join Date: May 2009
Posts: CXVIII
Default Re: Studying and Education with AI

Here are some thoughts I've had about using AI in education.

I've been trying, and failing, to develop a GPT bot that will guide a student through learning anything. The first step in building such a thing is to pin down some nebulous terms and define the goal more concretely. The fundamental question is: what do we mean by "learn"? What does it mean to learn and understand something? This is a really deep question, and we could go back through 3,000 years of philosophical discussion to answer it. Instead of doing that right now, which might take a while, we can answer it indirectly by looking at respected modern educational frameworks.

I actually just asked ChatGPT what it means to understand something and had a short discussion with it. I then asked it for some respected, contemporary educational frameworks for learning and it told me to check out Bloom's Taxonomy.

I'm not an educator. I don't know any theory behind educational paradigms, and I'm not well read in epistemology and cognitive psychology. So instead of having me butcher this and pretend I know what I'm talking about, let's have ChatGPT explain it.

Quote:
Bloom's Taxonomy, a framework developed by Benjamin Bloom in 1956, is a classification system used to define and distinguish different levels of human cognition—thinking, learning, and understanding. It's widely used in education to guide curriculum development, instructional methods, and assessment strategies.

<cut out some excess explanation>

Revised Taxonomy (2001):

Remembering: Retrieving, recognizing, and recalling relevant knowledge from long-term memory.

Understanding: Constructing meaning from oral, written, and graphic messages.

Applying: Carrying out or using a procedure through executing, or implementing.

Analyzing: Breaking material into constituent parts, determining how the parts relate to one another and to an overall structure or purpose.

Evaluating: Making judgments based on criteria and standards through checking and critiquing.

Creating: Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure.

<cut out some excess explanation>
I had a long conversation with ChatGPT beyond this. To quickly recap: it suggested there's something explicitly missing from the original, basic framework of Bloom's taxonomy, namely metacognition.

Quote:
Metacognitive Knowledge: This is the highest level of knowledge and refers to the awareness and understanding of one’s own thought processes. It involves self-awareness about one's cognitive abilities, understanding of one's own learning style and strategies, and knowledge about when and how to use particular strategies for learning or problem-solving.

<cut out some additional info>

Implications for Bloom's Taxonomy:

Complementary Relationship: Metacognition complements Bloom’s Taxonomy by adding a layer of self-awareness and regulation over the cognitive processes outlined in the taxonomy.

Dynamic Learning Process: It emphasizes the dynamic and fluid nature of learning, suggesting that cognitive development is not just about acquiring and applying knowledge but also about understanding and regulating one’s own learning process.

In summary, metacognition is intricately linked with Bloom's Taxonomy, primarily interacting with the higher-order thinking skills but also playing a role across all levels. It extends the taxonomy by introducing elements of self-regulation, planning, monitoring, and reflection, thereby enriching the educational framework and emphasizing the importance of self-awareness in learning.
Cool, so we have a good framework for learning anything here: Bloom's taxonomy + metacognition. Bringing this back to the original point of the thread, how can AI help us learn anything?

When you're studying something alone, with no access to a tutor or teacher, you're at a disadvantage: the two-way interaction between student and teacher is invaluable. If you have access to AI, learning suddenly becomes much more engaging and effective, because the AI can fill that role.

When I get help from AI on a coding problem, once a solution is finally reached I like to finish the interaction with a retrospective: I recap what we just did, anything I learned, why something did or didn't work, why I didn't understand something, and what the key ideas were. When I tell ChatGPT all of this, it double-checks the logical and factual content of what I'm saying and often clarifies or extends it. It may correct me, and maybe another "loop" happens where I said something incorrect or illogical, so we do another recap, and maybe I go look at something again. ChatGPT can help me identify where I'm failing to grasp a key idea or connection. It can also help me make a connection to something else entirely, even in another subject, and I can ask whether I'm really stretching or the connection is actually illuminating.

I wanted to provide some prompts, but it's hard because it depends on the subject matter. One prompt pattern you can apply to absolutely everything is to explain something in your own words, then ask ChatGPT whether what you're saying is logical, factual, and makes sense. I often say something like "I think I'm not understanding this because _____" or "I think I was finally able to understand _____ because I needed to _____", and ChatGPT will help me clarify my own thinking or correct something illogical or incorrect I'm saying. This prompt is ridiculously powerful because it makes you reflect on the way you think and learn.

This is the metacognitive part. When you go back and think about the way you think, and have a third party analyze that with you, that is how you really, really learn something at a deep level.
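To tie this back to the tutor bot I mentioned trying to build: a first cut at its instructions could just walk the taxonomy explicitly and bolt the metacognitive recap on the end. A sketch only, not the actual bot, and the wording is invented for illustration.

Code:
# Sketch of system instructions for a study-tutor bot built around the revised
# Bloom's taxonomy plus a metacognitive recap. Invented for illustration only;
# not a working bot.
TUTOR_SYSTEM_PROMPT = """
You are a tutor. For the topic the student gives you, work upward through these
levels one at a time, and do not advance until the student succeeds at the
current one:

1. Remembering   - ask the student to state the key facts or definitions.
2. Understanding - ask them to explain the idea in their own words.
3. Applying      - give one small problem that uses the idea.
4. Analyzing     - ask how the parts of the idea relate to one another.
5. Evaluating    - ask them to judge a flawed claim about the topic.
6. Creating      - ask them to combine the idea with something they already know.

After every exchange, add a short metacognitive recap: what the student got
right, where they went wrong, and one question about HOW they arrived at their
answer (e.g. "What made you think X followed from Y?").
"""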

There are some positive socioeconomic implications to all of this. Not everyone can afford a good education, and not everyone can hire a private tutor for $40+ an hour. The cost of university in 2024 is fucking outrageous. Far more people can afford a cheap computer that runs a web browser and a $20/month subscription to ChatGPT. Assuming you use it correctly, I'd go so far as to say ChatGPT is going to be a much better tutor than some of the people charging by the hour for the privilege.
Thanks, from:
256 colors (02-15-2024), Ensign Steve (01-16-2024)
  #7  
Old 01-12-2024, 10:39 PM
michio is offline
Member
 
Join Date: May 2009
Posts: CXVIII
Default Re: Studying and Education with AI

Honestly, when I asked ChatGPT to give me a framework for learning anything, I liked its answer better than its explanation of Bloom's taxonomy. It's obviously drawing on the original taxonomy, but it improved on it and incorporated metacognition as a key aspect.

Quote:
Understanding, in the context of learning and cognition, can be dissected into several key elements:

Comprehension: This is the foundational level of understanding. It involves grasping the meaning of information or an idea. Comprehension is not just about memorizing facts; it's about internalizing the basic concept so that it can be recalled and explained in simple terms.

Application: Once a concept is comprehended, the next level of understanding is applying that knowledge. This involves using the information in new situations or contexts. For example, in mathematics, understanding a formula isn't just about knowing what it is, but also about being able to use it in solving problems.

Analysis: This involves breaking down information into its constituent parts and understanding the relationship between these parts. Analysis is essential for complex subjects where understanding the whole requires an understanding of its components.

Synthesis: This is the process of combining various pieces of knowledge or information to form a new whole. This level of understanding signifies an ability to think critically and creatively, using known information to generate new ideas or solutions.

Evaluation: The highest level of understanding involves making judgments about the value of information or ideas. This includes assessing the credibility of sources, the validity of arguments, and the applicability of solutions to real-world problems.

Metacognition: This refers to an awareness and understanding of one's own thought processes. It's a higher-order thinking skill where individuals can monitor, control, and evaluate their learning and understanding.

For your educational framework, incorporating these elements will be crucial. The system should not only present information but also guide users through these stages of understanding. This might involve interactive elements that test comprehension, application exercises, analytical tasks, creative problem-solving challenges, and opportunities for critical evaluation.

Moreover, integrating metacognitive strategies will help users become more aware of their learning processes, which is key to self-directed learning and long-term retention. These strategies might include reflective questions, prompts to summarize what has been learned, or tools to plan and evaluate learning goals.

In summary, understanding in a learning context is a dynamic and multi-layered process. It starts with comprehension and progresses through application, analysis, synthesis, and evaluation, with metacognition playing an overarching role.
It's obvious but worth repeating that nobody can learn for you. If you're lazy or you just don't care about the subject, the best you can do is regurgitate some facts that you're going to immediately forget about when the test is over.

With AI now part of the educational landscape, I'm a little pessimistic about American education in particular. Reflecting on my time in grade school, most kids don't give a single fuck about anything they're learning at school. The goal is to turn in the homework or the test, regurgitating whatever is necessary to get the grade, then immediately purge it from their brains. Without a culture of learning where knowledge is respected and valued, AI will probably just make kids even lazier and more nihilistic than they already are. I remember kids saying things like, "Why do I have to learn math when the calculator can do everything?" Now that we have ChatGPT, a seemingly all-knowing oracle that can instantly answer whatever question you give it, kids are probably wondering why they should have to learn anything at all.

Additionally, there are some factors unique to Gen Z and Gen Alpha that make this even worse. There's a lot of nihilism and despair in these generations because they feel powerless and disenfranchised by a bleak outlook on their future. Why put effort into anything when your future is dying in the climate wars or being automated into homelessness by AI?
Thanks, from:
256 colors (02-15-2024), Ensign Steve (01-16-2024), JoeP (01-12-2024)
  #8  
Old 02-15-2024, 02:05 PM
256 colors is offline
Karma is Rael
 
Join Date: Oct 2007
Location: Paradise Park
Gender: Bender
Posts: DCX
Default Re: Studying and Education with AI

Oh gosh, things are gonna get worse if people don't realize that AI has limitations. I don't know. I mean, I know it's useful in some ways, such as coding and enhancing details in images. But you're right to be concerned about future reliance on AI to explain things.

I wish there were a way to make sure that folks know that despite the marketing, AI's not intelligent, it's just a big bot that spits out different versions of scraped text in reply to code.

I like big bots, and I cannot lie. Chat bots were a hobby of mine. When a bot comes in with a Markov chain or newbies try to reply in vain, I get ... sprung? Maybe I've coded some?
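(For anyone who never messed with those: a Markov-chain bot is just a table of which word tends to follow which, plus a random walk over it. A toy sketch from memory, for illustration only, not any particular bot I wrote:)

Code:
# Toy word-level Markov chain "chatbot" of the old hobbyist kind; a rough
# sketch for illustration, nothing like an LLM.
import random
from collections import defaultdict

corpus = "i like big bots and i cannot lie you other coders can't deny".split()

# Transition table: word -> list of words observed immediately after it.
chain = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    chain[current].append(nxt)

def babble(seed, length=10):
    """Generate text by repeatedly sampling an observed next word."""
    word, out = seed, [seed]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:  # dead end: no observed successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(babble("i"))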

ANYWAY. The LLM is just a scrape. It isn't "reasoning." (*cough*) Idk, I am a few years behind in reading things.

https://www.newyorker.com/tech/annal...peg-of-the-web

Annals of Artificial Intelligence
ChatGPT Is a Blurry JPEG of the Web
OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?
By Ted Chiang

February 9, 2023
Thanks, from:
Ari (02-15-2024), Ensign Steve (02-15-2024), JoeP (02-15-2024), Sock Puppet (02-15-2024)
  #9  
Old 02-15-2024, 02:20 PM
256 colors is offline
Karma is Rael
 
Join Date: Oct 2007
Location: Paradise Park
Gender: Bender
Posts: DCX
Default Re: Studying and Education with AI

horse_ebooks walked so ChatGPT could run - Album on Imgur

"It's all just a Dead Horse Beatery."
Attached image: horse e books walked so chatgpt could run.jpg
  #10  
Old 02-15-2024, 04:07 PM
Ensign Steve is offline
California Sober
 
Join Date: Jul 2004
Location: Silicon Valley
Gender: Bender
Posts: XXXMMCCCXLVI
Images: 66
Default Re: Studying and Education with AI

Quote:
Originally Posted by 256 colors View Post
I like big bots, and I cannot lie. Chat bots were a hobby of mine. When a bot comes in with a Markov chain or newbies try to reply in vain, I get ... sprung? Maybe I've coded some?
:thanked:

Quote:
ANYWAY. The LLM is just a scrape. It isn't "reasoning."
The same could be said for Google, and people seem generally happy to just "google it". I think when Google results are wrong or weird, it doesn't have the same creepiness factor as ChatGPT, maybe because you can still tell it's just a computer program? This gen AI stuff is getting us into uncanny-valley territory, where it's too close to seeming like a real person, and way too confident and sincere-sounding for how wrong/dumb it is.

Quote:
OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?
Yeah, something like that. By the time it's gone through the LLM machine, the original scraped material has been chewed up and shat out to the point where you're like, how much of this is real and how much is the chatbot editorializing and/or hallucinating?

Sorry if my metaphors are no bueno this morning, I'm still waking up. :coffeeff:
__________________
:kiwf::smurf:
Thanks, from:
Crumb (02-15-2024), JoeP (02-15-2024)
  #11  
Old 02-15-2024, 04:23 PM
Ari is offline
I read some of your foolish scree, then just skimmed the rest.
 
Join Date: Jan 2005
Location: Bay Area
Gender: Male
Posts: XMCMLVII
Blog Entries: 8
Default Re: Studying and Education with AI

Quote:
Originally Posted by Ensign Steve View Post
The same could be said for google, and people seem generally happy to just "google it". I think when google results are wrong or weird, it doesn't have the same creepiness factor as chat gpt, maybe because you can still tell it's just a computer program? This gen AI stuff is getting us into this uncanny valley territory where it's too close to seeming like a real person, and way too confident and sincere-sounding for as wrong/dumb as it is.
That article, which is quite good BTW, gets into this a bit. It likens ChatGPT to a kind of lossy JPEG compression, but for words. With JPEGs, a lot of color information gets dumped, because given detailed enough lights and darks our brains will rein the colors into their respective areas; but if you zoom in far enough, or try to make adjustments, you'll find that what seemed correct is really colored blobs in a rough approximation of the original.

New Yorker:
Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

A lot of uses have been proposed for large language models. Thinking about them as blurry JPEGs offers a way to evaluate what they might or might not be well suited for. Let’s consider a few scenarios.

Can large language models take the place of traditional search engines? For us to have confidence in them, we would need to know that they haven’t been fed propaganda and conspiracy theories—we’d need to know that the JPEG is capturing the right sections of the Web. But, even if a large language model includes only the information we want, there’s still the matter of blurriness. There’s a type of blurriness that is acceptable, which is the re-stating of information in different words. Then there’s the blurriness of outright fabrication, which we consider unacceptable when we’re looking for facts. It’s not clear that it’s technically possible to retain the acceptable kind of blurriness while eliminating the unacceptable kind, but I expect that we’ll find out in the near future.
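Side note: if you want to see the image version of that lossiness for yourself, here's a quick toy demo. Just a sketch, assuming NumPy and Pillow are installed; the image is synthetic and the quality setting is arbitrary.

Code:
# Toy illustration of lossy vs lossless compression, as in the Chiang analogy.
# Assumes NumPy and Pillow are installed; the test image is synthetic.
import numpy as np
from PIL import Image

# Build a grayscale image with fine detail: a gradient plus high-frequency stripes.
x = np.linspace(0, 255, 256, dtype=np.uint8)
gradient = np.tile(x, (256, 1))
stripes = ((np.indices((256, 256)).sum(axis=0) % 8) < 4) * 40
original = np.clip(gradient.astype(int) + stripes, 0, 255).astype(np.uint8)

img = Image.fromarray(original)
img.save("lossless.png")          # exact pixels preserved
img.save("lossy.jpg", quality=5)  # aggressive, lossy compression discards detail

# Reload the JPEG and measure how far it drifts from the original.
degraded = np.asarray(Image.open("lossy.jpg"), dtype=int)
print("mean absolute pixel error:", np.abs(degraded - original).mean())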
Thanks, from:
256 colors (02-16-2024), Crumb (02-15-2024), Ensign Steve (02-15-2024), fragment (02-16-2024), JoeP (02-15-2024), mickthinks (02-15-2024), Sock Puppet (02-15-2024)
  #12  
Old 02-16-2024, 07:27 AM
seebs is offline
God Made Me A Skeptic
 
Join Date: Jul 2004
Location: Minnesota
Posts: VMMMCLXXI
Images: 1
Default Re: Studying and Education with AI

Still very impressed by the example of asking one of these things which is heavier, a pound of feathers or two pounds of bricks, and getting a very confident explanation that the answer is that they weigh the same.

Because it "knows" that the answer to "which is heavier, feathers or bricks" is "they weigh the same".
__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn
Thanks, from:
256 colors (02-16-2024), Ari (02-16-2024), Crumb (02-16-2024), Ensign Steve (02-16-2024)
  #13  
Old 02-16-2024, 06:34 PM
davidm is offline
Spiffiest wanger
 
Join Date: Jul 2004
Posts: MXCLXXXIV
Blog Entries: 3
Laugh Re: Studying and Education with AI

LOL @ more “AI” in “education”
Thanks, from:
256 colors (02-17-2024)
  #14  
Old 02-16-2024, 11:36 PM
fragment is offline
mesospheric bore
 
Join Date: Jul 2005
Location: New Zealand
Gender: Male
Posts: VMD
Blog Entries: 8
Images: 143
Default Re: Studying and Education with AI

Of course, people still use less sophisticated ways of making shit up for papers:

No data? No problem! Undisclosed tinkering in Excel behind economics paper – Retraction Watch

Quote:
The reader, a PhD student in economics, was working with the same data described in the paper. He knew they were riddled with holes – sometimes big ones: For several countries, observations for some of the variables the study tracked were completely absent...

... Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out.
:facepalm:
__________________
Avatar source CC BY-SA
Thanks, from:
256 colors (02-17-2024), Ari (02-17-2024), viscousmemories (02-17-2024)
  #15  
Old 02-17-2024, 01:56 AM
256 colors is offline
Karma is Rael
 
Join Date: Oct 2007
Location: Paradise Park
Gender: Bender
Posts: DCX
Default Re: Studying and Education with AI

Air Canada tried to say that its customer service chatbot was a "separate legal entity." A Canadian tribunal disagreed.

https://arstechnica.com/tech-policy/...lines-chatbot/

Quote:
Air Canada must honor refund policy invented by airline’s chatbot

Air Canada appears to have quietly killed its costly chatbot support.

ASHLEY BELANGER - 2/16/2024, 12:12 PM

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.

On the day Jake Moffatt's grandmother died, Moffat immediately visited Air Canada's website to book a flight from Vancouver to Toronto. Unsure of how Air Canada's bereavement rates worked, Moffatt asked Air Canada's chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

Quote:
If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.
Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada's Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.

Experts told the Vancouver Sun that Moffatt's case appeared to be the first time a Canadian company tried to argue that it wasn't liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada's defense "remarkable."

"Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot."

Further, Rivers found that Moffatt had "no reason" to believe that one part of Air Canada's website would be accurate and another would not.

Air Canada "does not explain why customers should have to double-check information found in one part of its website on another part of its website," Rivers wrote.

In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (CAD) off the original fare (about $482 USD), which was $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt's tribunal fees.

Air Canada told Ars it will comply with the ruling and considers the matter closed.
How "closed," exactly?

Quote:
Air Canada’s chatbot appears to be disabled

When Ars visited Air Canada's website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars' request to confirm whether the chatbot is still part of the airline's online support offerings.

Last March, Air Canada's chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI "experiment."

Initially, the chatbot was used to lighten the load on Air Canada's call center when flights experienced unexpected delays or cancellations.

... Over time, Crocker said, Air Canada hoped the chatbot would "gain the ability to resolve even more complex customer service issues," with the airline's ultimate goal to automate every service that did not require a "human touch."

If Air Canada can use "technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that "Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries." It was worth it, Crocker said, because "the airline believes investing in automation and machine learning technology will lower its expenses" and "fundamentally" create "a better customer experience."

It's now clear that for at least one person, the chatbot created a more frustrating customer experience.
Thanks, from:
Ari (02-17-2024), Crumb (02-17-2024)
  #16  
Old 02-17-2024, 02:12 AM
Ari is offline
I read some of your foolish scree, then just skimmed the rest.
 
Join Date: Jan 2005
Location: Bay Area
Gender: Male
Posts: XMCMLVII
Blog Entries: 8
Default Re: Studying and Education with AI

It's especially amusing that they argue the bot should be allowed to lie, as if that's not clearly an end run around other regulations. Sure, they can't put up a sign that says free blackjack and hookers, but if the bot just happens to suggest such a thing to get people to book a flight, while also linking to the fine print that says otherwise, well, that's just out of our hands.
Thanks, from:
256 colors (02-17-2024), Crumb (02-17-2024), JoeP (02-17-2024)
  #17  
Old 02-18-2024, 08:12 PM
Ari is offline
I read some of your foolish scree, then just skimmed the rest.
 
Join Date: Jan 2005
Location: Bay Area
Gender: Male
Posts: XMCMLVII
Blog Entries: 8
Default Re: Studying and Education with AI

Similar to when someone described a GAN as a mathematical function approximator, I've had the idea that Transformers are just a type of lossy compression bouncing around in my head, so much so that I had dreams about my dreams being compressed by a GAN that would make them recallable only as a prompt.

While I know a lot of it is clickbait, I've seen enough "can you guess if this is real" posts about the new Sora videos that I have to wonder just how much attention people pay to the videos they watch, and whether the eventual result of all this is artists focusing on the things that require detail while letting the AI fudge the rest. Because yes, I can absolutely tell which ones are AI, for the specific reason that at no point in reality has a cat taken a step by having its back legs morph into its front ones.

Of course the reply is 'wait till it gets better', and while that's true, it doesn't need to be 'better', it needs to be perfect to be used on a main character, since at no point has any animal ever walked in that manner. It could get away with it on a background character, but it's about as realism-breaking as a character clipping through a 3D-modeled floor. All that said, I think a lot of quick, background, 'good enough to move to the next scene' effects are going to get way more realistic.

(Also yes I'm apparently just going to bounce back and forth between the multiple Neural Network threads).
Thanks, from:
256 colors (02-20-2024), Crumb (02-19-2024), JoeP (02-19-2024)