
Future of AI

News on Artificial Intelligence in Education and Libraries

Saturday, July 20, 2024

Instruction Pretraining LLMs

A lot happened last month: Apple announced the integration of on-device LLMs, Nvidia shared its large Nemotron model, FlashAttention-3 was announced, Google's Gemma 2 came out, and much more. You've probably already read about it all in various news outlets.

ORIGINAL LINK: https://magazine.sebastianraschka.com/p/instruction-pretraining-llms

Friday, July 19, 2024

This Week in AI with Reed Hepler and Steve Hargadon (July 19, 2024)


We've released our newest "This Week in AI" recording, now back on Fridays. Hope you enjoy! AI summary provided by summarize.tech: https://www.summarize.tech/youtu.be/nm0yXzk9HF8.


00:00:00 - 00:40:00

In the July 19, 2024 episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discussed recent developments in AI, including the release of GPT-4o mini, a smaller and faster version of the popular AI model, and the state of AI in education. They also explored the influence of AI on culture and the potential manipulation of large language models, the current state of AI and its relationship to human skills, and the fragility and potential dangers of over-reliance on technology. The hosts emphasized the importance of using AI as a collaborative tool, focusing on specific productivity tools rather than grand promises, and maintaining realistic expectations. They also touched upon the ethical considerations of creating deepfakes and the potential risks of centralized control and technology failures.

  • 00:00:00 In this section of the "This Week in AI" video from July 19, 2024, hosts Steve Hargadon and Reed Hepler discuss recent developments in the field of AI. First, they discuss the release of GPT-4o mini, a smaller and faster version of the popular AI model, which is designed for use in business applications and apps. The mini version is less cumbersome and is being made available online for people to try. However, its effectiveness is still being debated, with some users reporting that it can answer questions effectively but may forget the last steps of complex prompts. The hosts also mention the release of a survey on the state of AI in education, which showed that only 46% of teachers and 36% of students believe that AI will be helpful in education. Despite this, 56% of educators plan to be more deliberate and pragmatic in their use of AI. The hosts suggest that people may not be using AI effectively or productively due to a lack of understanding of how to use it well.
  • 00:05:00 In this section of the "This Week in AI" video from July 19, 2024, the hosts Reed Hepler and Steve Hargadon discuss the use and perception of AI in education. Hepler mentions a survey by Quizlet, an edtech company, which found that half of their audience doesn't use AI at all, and those who do often use it minimally. Hargadon shares another study where students using a specialized GPT tutor performed better than those using a regular chatbot model or having no access to AI at all. The hosts agree that the role of AI and how it's perceived shapes its effectiveness. They also emphasize the importance of proper training and framing when using AI in education to avoid unrealistic expectations and misunderstandings.
  • 00:10:00 In this section of the "This Week in AI" YouTube video from July 19, 2024, Steve Hargadon and Reed Hepler discuss the influence of AI on culture and the potential manipulation of large language models. Hargadon expresses concern over the shaping of responses by those in power and control, citing examples from China and the United States. He argues that the framing of AI is crucial in education and that people are becoming overly trusting of AI's human-like responses and apparent consciousness. The conversation also touches on the impact of AI on families, with some children developing emotional attachments to AI tools like Alexa. Reed Hepler encourages listeners to read an article by Lance Eliot in Forbes for further insight into the topic.
  • 00:15:00 In this section of the "This Week in AI" video from July 19, 2024, Reed Hepler and Steve Hargadon discuss the current state of AI and its relationship to human skills. Hepler mentions that some people have reached a trough of disillusionment with AI, but this is only the case if they had unrealistic expectations. Hargadon adds that people are still trying to understand the vast capabilities of AI and that it's essential to recognize its limitations. They also discuss a study that found language models like ChatGPT memorize more than they reason, emphasizing the importance of understanding AI's data-driven nature. The conversation then touches on the human tendency to perceive AI as conscious and accurate, even when it may not be. The section concludes with news about a representative from Virginia using AI to restore her voice after losing it.
  • 00:20:00 In this section of "This Week in AI - 19 July 2024", hosts Reed Hepler and Steve Hargadon discuss the advancements in AI technology that allow it to recreate a person's voice and speaking style with remarkable accuracy. They share an example of someone's speech being generated in Tucker Carlson's voice and posted on TikTok, which went unnoticed by most commenters. The hosts ponder the implications of this technology, including the potential for creating deepfakes of deceased loved ones and the ethical considerations of building relationships with AI personalities that mimic real people. They also touch upon AI's predictive abilities and their potential impact on human relationships.
  • 00:25:00 In this section of the "This Week in AI - 19 July 2024" YouTube video, Reed Hepler and Steve Hargadon discuss OpenAI's alleged roadmap to AGI (Artificial General Intelligence), which includes five levels: chatbots, reasoners, agents, innovators, and organization-wide AI tools. Hepler expresses skepticism about the plausibility of this roadmap. Steve Hargadon adds that OpenAI might be presenting this roadmap to alleviate safety concerns and that the company has a history of making surprising announcements. They also touch upon the potential dangers of a fully reasoning AI, which could expose power structures and manipulation, and the fragility of the electronic universe, including the potential risks of an EMP (Electromagnetic Pulse) that could take out most of the electronics in an area.
  • 00:30:00 In this section of the "This Week in AI" video from July 19, 2024, the hosts Steve Hargadon and Reed Hepler discuss the fragility and potential dangers of over-reliance on technology, specifically AI. They reflect on the impact of technology failures, such as the blue screen of death, which they compare to the Y2K issue. Reed Hepler shares his personal experiences with technology-related screens of death. The conversation then shifts to the risks of centralized control and over-reliance on technology in various industries, including transportation and finance. Steve Hargadon adds that the rapid growth and development of technology, particularly AI and supercomputers, increase the risks and make it challenging to ensure backup systems and prevent potential catastrophic failures. The hosts also touch upon the over-promising of AI capabilities and the importance of realistic expectations.
  • 00:35:00 In this section of the "This Week in AI" video from July 19, 2024, Steve Hargadon discusses the importance of focusing on specific productivity tools rather than grand promises of increased productivity through AI. He uses the example of the Covid-19 pandemic, which faded from public discourse as it became part of daily life, and compares that trajectory to the integration of AI into various tools and applications. Hargadon emphasizes the need to remember that language isn't logic and that humans are ultimately responsible for the output of AI tools. Reed Hepler adds to the conversation by reflecting on the public discourse around Covid-19 and AI, and expressing his belief that AI is a long-term story that will become pervasive in what we do.
  • 00:40:00 In this section of the "This Week in AI" YouTube video from July 19, 2024, hosts Steve Hargadon and Reed Hepler discuss the theme of over-promising and the need for realistic expectations when it comes to AI. They emphasize the importance of using AI as a collaborative tool for research and productivity, while also acknowledging the potential for dependency on the technology. Hargadon uses the example of cars and cell phones to illustrate how humans have adopted and become dependent on technologies that we don't fully understand or have the ability to create ourselves. The hosts conclude by acknowledging that only time will tell if our dependence on AI is the right thing.

Why Americans Believe That Generative AI Such As ChatGPT Has Consciousness

Quick question for you before we get underway on today’s discussion. Are you conscious or exhibiting consciousness, right now, as you read this sentence?

ORIGINAL LINK: https://www.forbes.com/sites/lanceeliot/2024/07/18/why-americans-believe-that-generative-ai-such-as-chatgpt-has-consciousness/

Why Confidence In Your Unique Skills Is Crucial In The Age Of AI

Concerns about the implications of AI aren’t unfounded. Ethical complications, in addition to shifts in the workforce, make AI a hot topic. The technology is already creating new jobs while disrupting other industries, including content creation.

ORIGINAL LINK: https://www.forbes.com/sites/johnhall/2024/07/14/why-confidence-in-your-unique-skills-is-crucial-in-the-age-of-ai/

China is dispatching crack teams of AI interrogators to make sure its corporations' chatbots are upholding 'core socialist values'

If I were to ask you what core values were embodied in western AI, what would you tell me? Unorthodox pizza technique? Annihilating the actually good parts of copyright law? The resurrection of the dead and the life of the world to come?

ORIGINAL LINK: https://www.pcgamer.com/software/ai/china-is-dispatching-crack-teams-of-ai-interrogators-to-make-sure-its-corporations-chatbots-are-upholding-core-socialist-values/

Language models like GPT-4 memorize more than they reason, study finds

A new study reveals that large language models such as GPT-4 perform much worse on counterfactual task variations compared to standard tasks. This suggests that the models often recall memorized solutions instead of truly reasoning.

ORIGINAL LINK: https://the-decoder.com/language-models-like-gpt-4-memorize-more-than-they-reason-study-finds/
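
To make the study's setup concrete, here is a minimal Python sketch of what a "counterfactual task variation" can look like: the same addition problem posed in familiar base 10 and in unfamiliar base 9. The prompt wording and the choice of base 9 are illustrative assumptions, not the paper's exact protocol; a model that genuinely reasons should handle both prompts, while one that mostly recalls memorized base-10 arithmetic will show the gap the study reports.

    # Counterfactual-task sketch (assumed setup, for illustration only):
    # the same two-digit addition is posed in base 10 and in base 9. A model
    # that reasons should solve both; one recalling memorized base-10
    # arithmetic will do worse on the base-9 variant.

    def to_base(n: int, base: int) -> str:
        """Render a non-negative integer in the given base (base <= 10)."""
        if n == 0:
            return "0"
        digits = []
        while n:
            digits.append(str(n % base))
            n //= base
        return "".join(reversed(digits))

    def make_pair(a: int, b: int, cf_base: int = 9):
        """Build a standard base-10 prompt and its counterfactual twin."""
        standard = (f"What is {a} + {b}? Answer in base 10.", str(a + b))
        counterfactual = (
            f"Assume all numbers are written in base {cf_base}. "
            f"What is {to_base(a, cf_base)} + {to_base(b, cf_base)}?",
            to_base(a + b, cf_base),
        )
        return standard, counterfactual

    if __name__ == "__main__":
        for prompt, answer in make_pair(27, 58):
            print(prompt, "->", answer)  # 85 in base 10; "104" in base 9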

OpenAI unveils GPT-4o mini, a small AI model powering ChatGPT

OpenAI introduced GPT-4o mini on Thursday, its latest small AI model. The company says GPT-4o mini, which is cheaper and faster than OpenAI's current cutting-edge AI models, is being…

ORIGINAL LINK: https://techcrunch.com/2024/07/18/openai-unveils-gpt-4o-mini-a-small-ai-model-powering-chatgpt/

Wednesday, July 17, 2024

We asked, you answered: nearly 50% of you never use AI at all

AI as a general technology seems to be here to stay. How it will actually play out and affect all aspects of our digital culture is yet to be seen at this point, but I think it is fair to assume big tech is going to keep betting on it for the foreseeable future.

ORIGINAL LINK: https://chromeunboxed.com/we-asked-you-answered-nearly-50-of-you-never-use-ai-at-all/

A.I. isn’t the thing. It may be the thing that gets us to the thing.

AI isn’t the thing. It’s the thing that gets us to the thing.

ORIGINAL LINK: https://www.ajjuliani.com/blog/ai-isnt-the-thing-its-may-be-the-thing-that-gets-us-to-the-thing

Ex-OpenAI and Tesla engineer Andrej Karpathy announces AI-native school Eureka Labs

But with generative AI, this type of learning experience feels "tractable," he noted.

ORIGINAL LINK: https://venturebeat.com/ai/ex-openai-and-tesla-engineer-andrej-karpathy-announces-ai-native-school-eureka-labs/

Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI

AI companies are generally secretive about their sources of training data, but an investigation by Proof News found some of the wealthiest AI companies in the world have used material from thousands of YouTube videos to train AI.

ORIGINAL LINK: https://www.proofnews.org/apple-nvidia-anthropic-used-thousands-of-swiped-youtube-videos-to-train-ai/

Tuesday, July 16, 2024

Tone it down a bit

This is a story about two kinds of intelligence: artificial intelligence and emotional intelligence. How often do we hear those two terms together?

ORIGINAL LINK: https://www.understandably.com/p/tone-it-down-a-bit

Intuit's CEO Just Said AI Is the Reason He's Laying Off 1,800 Employees. His Memo Is the Worst I've Seen Yet

I've written a number of times about CEOs sending out all-company memos detailing how the company is laying off employees. It's not a fun thing to write about, but I do think it's important, especially as there are lessons every leader can learn.

ORIGINAL LINK: https://www.inc.com/jason-aten/intuits-ceo-just-said-ai-is-reason-hes-laying-off-1800-employees-his-memo-is-worst-ive-seen-yet.html

Advanced Microsoft AI Voice Cloning Deemed Too Dangerous for Public Use

Microsoft has developed an advanced artificial intelligence (AI) text-to-speech program that achieves human-like believability.

ORIGINAL LINK: https://thenationalpulse.com/2024/07/15/advanced-microsoft-ai-voice-cloning-deemed-too-dangerous-for-public-use/