Future of AI

News on Artificial Intelligence in Education and Libraries

Friday, July 26, 2024

This Week in AI with Reed Hepler and Steve Hargadon (July 26, 2024)

00:00:00 - 00:30:00 In the July 26, 2024 episode of "This Week in AI," hosts Reed Hepler and Steve Hargadon discuss the disconnect between experts' predictions about the future of AI and the practical preparations of businesses and individuals, emphasizing the importance of focusing on practical applications and avoiding unrealistic expectations.

ORIGINAL LINK: https://news.futureofai.org/2024/07/this-week-in-ai-with-reed-hepler-and_26.html

This Week in AI with Reed Hepler and Steve Hargadon (July 26, 2024)


We've released our latest "This Week in AI" recording, back on Fridays (though we're taking next week off). Hope you enjoy! AI summary provided by summarize.tech: https://www.summarize.tech/www.youtube.com/watch?v=35xrNF4xzlA

00:00:00 - 00:30:00

In the July 26, 2024 episode of "This Week in AI," hosts Reed Hepler and Steve Hargadon discuss the disconnect between experts' predictions about the future of AI and the practical preparations of businesses and individuals, emphasizing the importance of focusing on practical applications and avoiding unrealistic expectations. They also touch on the challenges of privacy concerns and the need for clear implementation of AI technologies. Hepler shares a personal experience of being stranded due to a computer system error, highlighting the unreliability of technology and its potential consequences. The conversation then shifts to the use of custom GPTs in AI and the Human Intelligence for Artificial Intelligence movement. The hosts discuss the latest developments in AI technology, including the integration of search engine capabilities into large language models and the potential implications for productivity and personal relationships. They also explore the possibility of using AI to simulate conversations with deceased individuals or geniuses, raising ethical concerns. The episode concludes with a discussion about the impact of AI on the photo industry and the importance of human creativity and expertise.

  • 00:00:00 In this section of the "This Week in AI" YouTube video, hosts Reed Hepler and Steve Hargadon discuss the implications of Ethan Mollick's blog post "Confronting Impossible Futures" for businesses and individuals. The post highlights the disconnect between experts' predictions about the future of AI and the practical preparations of businesses and individuals. Hepler and Hargadon emphasize the importance of focusing on practical applications of AI and avoiding overpromising and unrealistic expectations. They also touch on the topic of scenario planning and the need for businesses to find specific, productive uses for AI. Additionally, they mention the challenges of privacy concerns and the need for clear, effective implementation of AI technologies. The conversation also includes a personal anecdote from Hepler about his experience with a computer system error that prevented him from traveling.
  • 00:05:00 In this section of the "This Week in AI" YouTube video, Reed Hepler shares his personal experience of being stranded at the airport due to a computer system update causing flight cancellations and delays. Hepler highlights the unreliability of technology and the potential cascading implications of simple errors. Steve Hargadon connects this incident to the current rush to profitability in AI, with companies making promises of magical solutions to save time and labor. However, these promises may not always be productive or worth the investment, as seen in the case of Microsoft's search summaries and other visible mistakes. The conversation also touches upon the potential of custom GPTs in AI and their significant capabilities.
  • 00:10:00 In this section of the "This Week in AI" YouTube video, Reed Hepler and Steve Hargadon discuss the various applications and considerations of using custom GPT models, as well as the importance of human responsibility in the age of AI. Hepler also introduces the Human Intelligence for Artificial Intelligence movement, which aims to prepare students for the workplace and an AI-driven world by emphasizing human qualities and abilities. The conversation then shifts to OpenAI's recent announcement of an AI-powered search engine, SearchGPT, which is intended to rival Google and Perplexity. The hosts express their thoughts on the implications of this development and the potential impact on the industry.
  • 00:15:00 In this section of the "This Week in AI" podcast with Reed Hepler and Steve Hargadon, they discuss the latest developments in AI technology, specifically the integration of search engine capabilities into large language models like ChatGPT and Claude. Hepler expresses excitement about this advancement as it represents a significant step towards AGI. However, Hargadon raises concerns about the potential shaping of responses by developers and the legal standards that may be imposed on such technology. He also emphasizes the importance of transparency and truthfulness in AI responses. The conversation then shifts to the issue of trust and cognitive shortcuts for decision-making, with Hargadon preferring a plain voice and footnotes for AI responses to avoid building a relationship based on trust. Hepler mentions a blog post he has written on skepticism and AI, inspired by Carl Sagan.
  • 00:20:00 In this section of the "This Week in AI" video, hosts Reed Hepler and Steve Hargadon discuss the potential of using AI to simulate conversations with deceased individuals or geniuses. Hepler brings up Carl Sagan as an example and suggests creating a custom GPT model using Sagan's written material and transcripts to mimic his voice and thinking style. Hargadon shares his experience with creating a similar model using Ivan Illich's material and expresses excitement about the possibility of having conversations with deceased relatives for therapeutic purposes or developing a closer relationship with the AI version. They also acknowledge the ethical concerns and potential for scams in this area. (A minimal code sketch of this custom-persona idea appears just after this list.)
  • 00:25:00 In this section of the "This Week in AI" video, Reed Hepler and Steve Hargadon discuss the potential implications of advanced AI technology in various aspects of life, including personal relationships and productivity. Steve shares his experience of attending an AI training session with a visual avatar narrator and the productivity boost he gained by using Otter to summarize the sessions. They also touch upon studies suggesting that AI can both promote creativity and lessen productivity, depending on how effectively it's used. Reed introduces the concept of merging AI and human intelligence to enhance existing processes and increase productivity. Steve emphasizes the importance of communication and idea generation in the creative process and how AI can facilitate these aspects by acting as a conversational partner.
  • 00:30:00 In this section of the "This Week in AI" YouTube video, hosts Reed Hepler and Steve Hargadon discuss the impact of AI on the photo industry. Steve argues that understanding how humans work is essential to effectively utilizing AI, and Reed shares an article about Mid Journey 6, a new AI model that is expected to significantly affect the industry. They agree that while AI may replace stock photography, it won't replace human photographers for events like weddings. The conversation also touches on the evolution of photography and its impact on the expertise of photographers over the years. Despite the advancements in AI, the hosts believe that human creativity and expertise will continue to be valuable. The episode concludes with a discussion about skipping the deep dives for the time being and looking forward to future topics.
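
The 00:20:00 segment above describes building a custom GPT from a person's published writing and interview transcripts so that it answers in their voice. Here is a rough Python sketch of that idea, assuming the OpenAI Python client; the excerpt file name, prompt wording, and model choice are illustrative assumptions, not a description of how the hosts built their models.

# Minimal persona-chat sketch (illustrative assumptions only; not the hosts' actual setup).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical corpus: excerpts from the person's own published writing.
with open("sagan_excerpts.txt", encoding="utf-8") as f:
    excerpts = f.read()[:6000]  # keep the prompt small

system_prompt = (
    "You emulate the voice and reasoning style of the author of the excerpts below. "
    "Stay within what the excerpts support, and say plainly when a question goes beyond them.\n\n"
    f"EXCERPTS:\n{excerpts}"
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # the small model mentioned elsewhere in this digest
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What makes a claim extraordinary?"},
    ],
)
print(reply.choices[0].message.content)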

Wednesday, July 24, 2024

Midjourney 6 Means the End for a Big Chunk of the Photo Industry

Those words — spoken to me by a large independent online publisher — should strike fear into the hearts of anyone in the photography industry. I agree with the publisher. Using Midjourney’s new Version 6 doesn’t initially feel revolutionary.

ORIGINAL LINK: https://medium.com/the-generator/midjourney-6-means-the-end-for-a-big-chunk-of-the-photo-industry-068cb5faeddc

The False Promises of AI

In 1770, the Hungarian author and inventor Wolfgang von Kempelen introduced an automaton chess machine known as The Mechanical Turk. This device showcased its automated chess master skills across Europe, frequently emerging victorious in matches against human opponents.

ORIGINAL LINK: https://medium.com/artificial-corner/the-false-promises-of-ai-fe23124e0fb9

Tuesday, July 23, 2024

How a Virtual Assistant Taught Me to Appreciate Busywork

I recently downloaded a virtual assistant that promised to ease the burdens of modern parenthood. The app is called Yohana, and it offered to handle a pile of tasks on my behalf.

ORIGINAL LINK: https://www.nytimes.com/2024/04/24/arts/artificial-intelligence-assistants-parents.html

Monday, July 22, 2024

Japan’s copyright rules draw AI groups — and alarm from creators

ORIGINAL LINK: https://www.ft.com/content/f9e7f628-4048-457e-b064-68e0eeea1e39

Library 2.0 School Library Summit - School Libraries and AI

EXCITING NEWS! Join us for an exciting and transformative mini virtual conference, "School Libraries and AI," designed specifically for school librarians and educators passionate about the future of library services.

ORIGINAL LINK: https://elissamalespina.substack.com/p/library-20-school-library-summit

Confronting Impossible Futures

I speak to a lot of people in industry, academia, and government, and I have noticed a strange blind spot. Despite planning horizons that often stretch a decade or more, very few organizations are seriously accounting for the possibility of continued AI improvement in their strategic planning.

ORIGINAL LINK: https://www.oneusefulthing.org/p/confronting-impossible-futures

Sunday, July 21, 2024

AI Sparks a Creative Revolution in Business, With an Unexpected Twist

In the race to harness artificial intelligence (AI), businesses are discovering an unexpected wrinkle: AI that sparks individual brilliance may be flattening the creative landscape.

ORIGINAL LINK: https://www.pymnts.com/artificial-intelligence-2/2024/ai-sparks-a-creative-revolution-in-business-with-an-unexpected-twist/

AI Lets People Chat With 'Clones' of Departed Loved Ones

An artificial intelligence (AI) that lets you communicate with a digital version of a deceased loved one. It sounds like the stuff of science fiction. (In fact, it’s the basis for an episode of “Black Mirror.”)

ORIGINAL LINK: https://www.pymnts.com/artificial-intelligence-2/2024/ai-lets-people-chat-with-clones-of-departed-loved-ones/

Google's AI visionary says we'll 'expand intelligence a millionfold by 2045' thanks to nanobots, the tech will resurrect the dead, and we're all going to live forever

AI is undoubtedly the biggest technology topic of the last decade, with mind-bogglingly vast resources from companies including Google, OpenAI and Microsoft being poured into the field. Despite that, the results so far are somewhat mixed.

ORIGINAL LINK: https://www.pcgamer.com/software/ai/googles-ai-visionary-says-well-expand-intelligence-a-millionfold-by-2045-thanks-to-nanobots-the-tech-will-resurrect-the-dead-and-were-all-going-to-live-forever/

Researchers develop framework to merge AI and human intelligence for process safety

Artificial intelligence (AI) has grown rapidly in the last few years, and with that increase, industries have been able to automate and improve their efficiency in operations. Contributors to this work are Dr.

ORIGINAL LINK: https://techxplore.com/news/2024-07-framework-merge-ai-human-intelligence.html

Saturday, July 20, 2024

Instruction Pretraining LLMs

A lot happened last month: Apple announced the integration of on-device LLMs, Nvidia shared their large Nemotron model, FlashAttention-3 was announced, Google's Gemma 2 came out, and much more. You've probably already read about it all in various news outlets.

ORIGINAL LINK: https://magazine.sebastianraschka.com/p/instruction-pretraining-llms

Friday, July 19, 2024

This Week in AI with Reed Hepler and Steve Hargadon (July 19, 2024)


We've released our newest "This Week in AI" recording, back on Fridays. Hope you enjoy! AI summary provided by summarize.tech: https://www.summarize.tech/youtu.be/nm0yXzk9HF8.


00:00:00 - 00:40:00

In the July 19, 2024 episode of "This Week in AI," hosts Steve Hargadon and Reed Hepler discussed recent developments in AI, including the release of GPT-4o mini, a smaller and faster version of the popular AI model, and the state of AI in education. They also explored the influence of AI on culture and the potential manipulation of large language models, the current state of AI and its relationship to human skills, and the fragility and potential dangers of over-reliance on technology. The hosts emphasized the importance of using AI as a collaborative tool, focusing on specific productivity tools rather than grand promises, and maintaining realistic expectations. They also touched upon the ethical considerations of creating deep fakes and the potential risks of centralized control and technology failures.

  • 00:00:00 In this section of the "This Week in AI" video from July 19, 2024, hosts Steve Hargadon and Reed Hepler discuss recent developments in the field of AI. First, they discuss the release of GPT-4o mini, a smaller and faster version of the popular AI model, which is designed for use in business applications and apps. The mini version is less cumbersome and is being made available online for people to use. However, its effectiveness is still being debated, with some users reporting that it can answer questions effectively but may forget the last steps of complex prompts. The hosts also mention the release of a survey on the state of AI in education, which showed that only 46% of teachers and 36% of students believe that AI will be helpful in education. Despite this, 56% of educators plan to be more deliberate and pragmatic in their use of AI. The hosts suggest that people may not be using AI effectively or productively due to a lack of understanding of how to use it well.
  • 00:05:00 In this section of the "This Week in AI" video from 19 July 2024, the hosts Reed Hepler and Steve Hargadon discuss the use and perception of AI in education. Hepler mentions a survey by Quizlet, an edtech company, which found that half of their audience doesn't use AI at all, and those who do often use it minimally. Hargadon shares another study in which students using a specialized GPT tutor performed better than those using a regular chatbot model or having no access to AI at all. The hosts agree that the role of AI and how it's perceived shapes its effectiveness. They also emphasize the importance of proper training and framing when using AI in education to avoid unrealistic expectations and misunderstandings.
  • 00:10:00 In this section of the "This Week in AI" YouTube video from July 19, 2024, Steve Hargadon and Reed Hepler discuss the influence of AI on culture and the potential manipulation of large language models. Hargadon expresses concern over the shaping of responses by those in power and control, citing examples from China and the United States. He argues that the framing of AI is crucial in education and that people are becoming overly trusting of AI's human-like responses and apparent consciousness. The conversation also touches on the impact of AI on families, with some children developing emotional attachments to AI tools like Alexa. Reed Hepler encourages listeners to read an article by Lance Eliot in Forbes for further insight into the topic.
  • 00:15:00 In this section of the "This Week in AI" video from July 19, 2024, Reed Hepler and Steve Hargadon discuss the current state of AI and its relationship to human skills. Hepler mentions that some people have reached a trough of disillusionment with AI, but this is only the case if they had unrealistic expectations. Hargadon adds that people are still trying to understand the vast capabilities of AI and that it's essential to recognize its limitations. They also discuss a study that found language models like ChatGPT memorize more than they reason, emphasizing the importance of understanding AI's data-driven nature. The conversation then touches on the human tendency to perceive AI as conscious and accurate, even when it may not be. The episode concludes with news about a representative from Virginia using AI to restore her voice after losing it.
  • 00:20:00 In this section of "This Week in AI - 19 July 2024", hosts Reed Hepler and Steve Hargadon discuss the advancements in AI technology that allow it to recreate a person's voice and speaking style with remarkable accuracy. They share an example of someone's speech being generated in Tucker Carlson's voice and posted on TikTok, which went unnoticed by most commenters. The hosts ponder the implications of this technology, including the potential for creating deep fakes of deceased loved ones and the ethical considerations of building relationships with AI personalities that mimic real people. They also touch upon the possibility of AI's predictive ability and the potential impact on human relationships.
  • 00:25:00 In this section of the "This Week in AI - 19 July 2024" YouTube video, Reed Hepler and Steve Hargadon discuss OpenAI's alleged roadmap to AGI (Artificial General Intelligence), which includes five levels: chatbots, reasoners, agents, innovators, and organization-wide AI tools. Hepler expresses skepticism about the plausibility of this roadmap. Steve Hargadon adds that OpenAI might be presenting this roadmap to alleviate safety concerns and that the company has a history of making surprising announcements. They also touch upon the potential dangers of a fully reasoning AI, which could expose power structures and manipulation, and the fragility of the electronic universe, including the potential risks of an EMP (Electromagnetic Pulse) that could take out most of the electronics in an area.
  • 00:30:00 In this section of the "This Week in AI" video from July 19, 2024, the hosts Steve Hargadon and Reed Hepler discuss the fragility and potential dangers of over-reliance on technology, specifically AI. They reflect on the impact of technology failures, such as the blue screen of death, which they compare to the Y2K issue. Reed Hepler shares his personal experiences with technology-related screens of death. The conversation then shifts to the risks of centralized control and over-reliance on technology in various industries, including transportation and finance. Steve Hargadon adds that the rapid growth and development of technology, particularly AI and supercomputers, increase the risks and make it challenging to ensure backup systems and prevent potential catastrophic failures. The hosts also touch upon the over-promising of AI capabilities and the importance of realistic expectations.
  • 00:35:00 In this section of the "This Week in AI" video from July 19, 2024, Steve Hargadon discusses the importance of focusing on specific productivity tools rather than grand promises of increased productivity through AI. He uses the example of the Covid-19 pandemic and how it has become integrated into daily life, and compares it to the integration of AI into various tools and applications. Hargadon emphasizes the need to remember that language isn't logic and that humans are ultimately responsible for the output of AI tools. Reed Hepler adds to the conversation by reflecting on the public discourse around Covid-19 and AI, and expressing his belief that AI is a long-term story that will become pervasive in what we do.
  • 00:40:00 In this section of the "This Week in AI" YouTube video from July 19, 2024, hosts Steve Hargadon and Reed Hepler discuss the theme of over-promising and the need for realistic expectations when it comes to AI. They emphasize the importance of using AI as a collaborative tool for research and productivity, while also acknowledging the potential for dependency on the technology. Hargadon uses the example of cars and cell phones to illustrate how humans have adopted and become dependent on technologies that we don't fully understand or have the ability to create ourselves. The hosts conclude by acknowledging that only time will tell if our dependence on AI is the right thing.

Why Americans Believe That Generative AI Such As ChatGPT Has Consciousness

Quick question for you before we get underway on today’s discussion. Are you conscious or exhibiting consciousness, right now, as you read this sentence?

ORIGINAL LINK: https://www.forbes.com/sites/lanceeliot/2024/07/18/why-americans-believe-that-generative-ai-such-as-chatgpt-has-consciousness/

Why Confidence In Your Unique Skills Is Crucial In The Age Of AI

Concerns about the implications of AI aren’t unfounded. Ethical complications, in addition to shifts in the workforce, make AI a hot topic. The technology is already creating new jobs while disrupting other industries, including content creation.

ORIGINAL LINK: https://www.forbes.com/sites/johnhall/2024/07/14/why-confidence-in-your-unique-skills-is-crucial-in-the-age-of-ai/

China is dispatching crack teams of AI interrogators to make sure its corporations' chatbots are upholding 'core socialist values'

If I were to ask you what core values were embodied in western AI, what would you tell me? Unorthodox pizza technique? Annihilating the actually good parts of copyright law? The resurrection of the dead and the life of the world to come?

ORIGINAL LINK: https://www.pcgamer.com/software/ai/china-is-dispatching-crack-teams-of-ai-interrogators-to-make-sure-its-corporations-chatbots-are-upholding-core-socialist-values/

Language models like GPT-4 memorize more than they reason, study finds

A new study reveals that large language models such as GPT-4 perform much worse on counterfactual task variations compared to standard tasks. This suggests that the models often recall memorized solutions instead of truly reasoning.

ORIGINAL LINK: https://the-decoder.com/language-models-like-gpt-4-memorize-more-than-they-reason-study-finds/
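
To make the study's distinction concrete: a "counterfactual task variation" keeps the underlying skill the same but changes an assumed default, for example posing the same addition problem in base 9 instead of base 10. The short Python sketch below is a generic illustration of that kind of probe (an assumption about the general idea, not the paper's exact benchmark); a model that genuinely reasons about addition should handle both prompts, while one reciting memorized base-10 patterns tends to stumble on the second.

# Generic illustration of a counterfactual task variation (not the study's exact benchmark):
# the same addition problem posed in base 10 and in base 9, plus a grader for digit-string answers.

def make_prompts(a: str, b: str) -> dict:
    return {
        "standard": f"In base 10, what is {a} + {b}? Answer with digits only.",
        "counterfactual": f"In base 9, what is {a} + {b}? Answer with digits only.",
    }

def check(answer: str, a: str, b: str, base: int) -> bool:
    """Grade a model's digit-string answer by doing the arithmetic ourselves."""
    expected = int(a, base) + int(b, base)
    try:
        return int(answer, base) == expected
    except ValueError:
        return False

prompts = make_prompts("27", "36")
print(prompts["standard"])        # correct answer: 63
print(prompts["counterfactual"])  # correct answer: 64 (in base 9, 27 + 36 is 25 + 33 = 58 decimal = "64")
print(check("63", "27", "36", base=10), check("64", "27", "36", base=9))  # True True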

OpenAI unveils GPT-4o mini, a small AI model powering ChatGPT

OpenAI introduced GPT-4o mini on Thursday, its latest small AI model. The company says GPT-4o mini, which is cheaper and faster than OpenAI’s current cutting edge AI models, is being…

ORIGINAL LINK: https://techcrunch.com/2024/07/18/openai-unveils-gpt-4o-mini-a-small-ai-model-powering-chatgpt/

Wednesday, July 17, 2024

We asked, you answered: nearly 50% of you never use AI at all

AI as a general technology seems to be here to stay. How it will actually play out and affect all aspects of our digital culture are yet to be seen at this point, but I think it is a fair assessment to assume big tech is going to keep betting on it for the foreseeable future.

ORIGINAL LINK: https://chromeunboxed.com/we-asked-you-answered-nearly-50-of-you-never-use-ai-at-all/

A.I. isn’t the thing. It may be the thing that gets us to the thing.

AI isn’t the thing. It’s the thing that gets us to the thing.

ORIGINAL LINK: https://www.ajjuliani.com/blog/ai-isnt-the-thing-its-may-be-the-thing-that-gets-us-to-the-thing

Ex-OpenAI and Tesla engineer Andrej Karpathy announces AI-native school Eureka Labs

But with generative AI, this type of learning experience feels “tractable,” he noted.

ORIGINAL LINK: https://venturebeat.com/ai/ex-openai-and-tesla-engineer-andrej-karpathy-announces-ai-native-school-eureka-labs/

Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI

AI companies are generally secretive about their sources of training data, but an investigation by Proof News found some of the wealthiest AI companies in the world have used material from thousands of YouTube videos to train AI.

ORIGINAL LINK: https://www.proofnews.org/apple-nvidia-anthropic-used-thousands-of-swiped-youtube-videos-to-train-ai/

Tuesday, July 16, 2024

Tone it down a bit

This is a story about two kinds of intelligence: artificial intelligence and emotional intelligence. How often do we hear those two terms together?

ORIGINAL LINK: https://www.understandably.com/p/tone-it-down-a-bit

Intuit's CEO Just Said AI Is the Reason He's Laying Off 1,800 Employees. His Memo Is the Worst I've Seen Yet

I've written a number of times about CEOs sending out an all-company memo detailing how the company is laying off employees. It's not a fun thing to write about, but I do think it's important, especially as there are lessons every leader can learn.

ORIGINAL LINK: https://www.inc.com/jason-aten/intuits-ceo-just-said-ai-is-reason-hes-laying-off-1800-employees-his-memo-is-worst-ive-seen-yet.html

Advanced Microsoft AI Voice Cloning Deemed Too Dangerous for Public Use.

Microsoft has developed an advanced artificial intelligence (AI) text-to-speech program that achieves human-like believability.

ORIGINAL LINK: https://thenationalpulse.com/2024/07/15/advanced-microsoft-ai-voice-cloning-deemed-too-dangerous-for-public-use/

Saturday, July 13, 2024

OpenAI outlines plan for AGI — 5 steps to reach superintelligence

OpenAI has quickly become one of the most important AI companies, with models used by both Apple and Microsoft and its own productivity platform in ChatGPT with millions of monthly subscribers. But the company says its goal is still to build an AI superintelligence.

ORIGINAL LINK: https://www.tomsguide.com/ai/chatgpt/openai-has-5-steps-to-agi-and-were-only-a-third-of-the-way-there

Study proposes framework for 'child-safe AI' following incidents in which kids saw chatbots as quasi-human, trustworthy

Artificial intelligence (AI) chatbots have frequently shown signs of an "empathy gap" that puts young users at risk of distress or harm, raising the urgent need for "child-safe AI," according to a study. The research, by a University of Cambridge academic, Dr.

ORIGINAL LINK: https://techxplore.com/news/2024-07-framework-child-safe-ai-incidents.html

ChatGPT maker OpenAI developing new breakthrough reasoning technology code-named ‘Strawberry’. Why is it important?

Sam Altman, chief executive officer of OpenAI, arrives for the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, US, on Tuesday, July 9, 2024.

ORIGINAL LINK: https://www.livemint.com/technology/tech-news/chatgpt-maker-openai-developing-new-breakthrough-reasoning-technology-code-named-strawberry-11720835396713.html

OpenAI is reportedly nearing AI systems that can reason. Here's why that could be a cause for concern.

OpenAI has a new scale to mark its progress toward artificial general intelligence, or AGI. According to a Bloomberg report, the company behind ChatGPT shared the new five-level classification system with employees at an all-hands meeting on Tuesday.

ORIGINAL LINK: https://www.businessinsider.com/openai-nears-ai-systems-reason-cause-concern-sam-altman-chatgpt-2024-7

Friday, July 12, 2024

First “Miss AI” contest sparks ire for pushing unrealistic beauty standards

An influencer platform called Fanvue recently announced the results of its first "Miss AI" pageant, which sought to judge AI-generated social media influencers and also doubled as a convenient publicity stunt.

ORIGINAL LINK: https://arstechnica.com/information-technology/2024/07/first-miss-ai-contest-sparks-ire-for-pushing-unrealistic-beauty-standards/

Wednesday, July 10, 2024

Perplexity Just Stole ChatGPT's Best Feature and Is Doing a Better Job

Perplexity AI introduced a new voice interface, outshining ChatGPT's feature with a better user experience and more natural conversational flow.

ORIGINAL LINK: https://www.howtogeek.com/perplexity-just-stole-chatgpts-best-feature-doing-a-better-job/

Tuesday, July 9, 2024

Japan’s Defense Ministry unveils first basic policy on use of AI

The Defense Ministry unveiled its first basic policy on the use of artificial intelligence on Tuesday, as Japan looks to stave off a manpower shortage and keep pace with China and the United States on the technology’s military applications.

ORIGINAL LINK: https://www.japantimes.co.jp/news/2024/07/02/japan/sdf-cybersecurity/

AI-Driven Behavior Change Could Transform Health Care

A staggering 129 million Americans have at least one major chronic disease—and 90% of our $4.1 trillion in annual health care spending goes toward treating these physical and mental-health conditions. That financial and personal toll is only projected to grow. We know this is unsustainable.

ORIGINAL LINK: https://time.com/6994739/ai-behavior-change-health-care/

Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen?

ORIGINAL LINK: https://www.the74million.org/article/was-los-angeles-schools-6-million-ai-venture-a-disaster-waiting-to-happen/

Saturday, July 6, 2024

Viewing AI as Alien Intelligence

Large Language Models with Artificial Intelligence (AI) are neural networks whose hardware is very different from the human brain.

ORIGINAL LINK: https://avi-loeb.medium.com/viewing-ai-as-alien-intelligence-f1d08286267f

The AI-Writing Paradox

My father must have been using ChatGPT for over 50 years, based on how people think AI-generated content can be identified at a glance. Well-written content! But not all well-written content is AI-generated.

ORIGINAL LINK: https://debralawal.medium.com/the-ai-writing-paradox-43e4fbae2435

Friday, July 5, 2024

When AI Triggers Our Imposter Syndrome

Several people have spoken to me about not feeling qualified to talk to their students about AI. Such feelings aren’t limited to this emerging technology. For many, the imposter syndrome has become part of what it means to be a faculty member in higher education.

ORIGINAL LINK: https://marcwatkins.substack.com/p/when-ai-triggers-our-imposter-syndrome

Wednesday, July 3, 2024

AI's new cost-free knowledge hype clashes with reality



ORIGINAL LINK: https://www.axios.com/2024/07/03/ai-power-knowledge-cost-future

Google’s AI Overviews Coincide With Drop In Mobile Searches

A new study by search industry expert Rand Fishkin has revealed that Google’s rollout of AI overviews in May led to a noticeable decrease in search volume, particularly on mobile devices.

ORIGINAL LINK: https://www.searchenginejournal.com/googles-ai-overviews-coincide-with-drop-in-mobile-searches/521303/

Her guacamole recipe tops Google. She fears AI will change that.

Recipe bloggers want Congress to scrutinize Google’s “AI Overviews.”

ORIGINAL LINK: https://www.washingtonpost.com/politics/2024/06/26/google-ai-overviews-congress-web-publishers-raptive/

Listen to your favorite books and articles voiced by Judy Garland, James Dean, Burt Reynolds and Sir Laurence Olivier

Thanks to partnerships with their estates, the voices of legendary stars of stage and screen Judy Garland, James Dean, Burt Reynolds, and Sir Laurence Olivier are now part of the library of voices on our Reader App.

ORIGINAL LINK: https://elevenlabs.io/blog/iconic-voices/

Monday, July 1, 2024

AI Outperforms Students in Real-World “Turing Test”

A study at the University of Reading revealed that AI-generated exam answers often go undetected by experienced exam markers, with 94% of such answers going unnoticed and achieving higher grades than student submissions.

ORIGINAL LINK: https://scitechdaily.com/ai-outperforms-students-in-real-world-turing-test/

Former NSA chief revolves through OpenAI's door

ORIGINAL LINK: https://responsiblestatecraft.org/former-nsa-chief-revolves-through-openai-s-door/