Amazon recently announced the launch of a new $50 million program called the AWS Worldwide Public Sector Generative AI Impact Initiative, which offers public sector organizations (and "those that support them") free Amazon technical resources to support their generative AI programs.
Participants will get free access to Amazon AI and infrastructure tools like:

- Amazon Bedrock: Amazon's tool for building generative AI applications.
- Amazon Q: Amazon's coding and business-focused chatbot.
- Amazon SageMaker: Amazon's tool for building machine learning models.
- AWS HealthScribe: Amazon's tool for generating clinical notes directly from doctor-patient conversations.
- AWS Trainium: Amazon's machine learning training chip.
- AWS Inferentia: Amazon's hardware accelerators that speed up deep learning processing within Amazon's Elastic Compute Cloud (EC2).
The free access will come in the form of promotional credits. Training resources, consulting and support will also be offered.
In its announcement of the initiative, Amazon said it was launching it because: "Across the public sector, leaders are seeking to leverage generative AI to become more efficient and agile. However, public sector organizations face several challenges such as optimizing resources, adapting to changing needs, improving patient care, personalizing the education experience, and strengthening security. To respond to these challenges, AWS is committed to helping public sector organizations unlock the potential of generative AI and other cloud-based technologies to positively impact society."
Organizations can sign up to find out more about the program here.
Posted by Becky Nagel
Microsoft Research's Dr. James McCaffrey is a well-known technology and AI expert, and author of the popular column "The Data Science Lab," among others. And while he comes from the tech side, he also regularly thinks, writes and speaks about the future of AI and its practical implications.
We here at AI Boardroom recently got a chance to ask Dr. McCaffrey about his thoughts on how AI is changing business and what advice he has for executives to stay ahead. Here's what he shared with us:
AI BOARDROOM: AI has always been used by businesses to help them innovate. Do you have any favorite examples?
DR. MCCAFFREY: AI, as we think of the term today, is new when it comes to business innovation. But machine learning, a subset of AI that focuses on predictions made by using data plus an algorithm like a neural network classifier, has been around for many years. I was involved with a couple of projects that predicted the results of American football games with 70 percent accuracy against the Las Vegas point spread. With the rise of legalized gambling, I'm sure there are similar efforts under way.
How do you see AI changing businesses and the future of work?
The rise of AI resembles the birth of the Internet in some ways. Some effects were obvious and immediate. The death of traditional newspapers and print magazines is one example. Other changes were longer in coming and not nearly as obvious. For example, the profound psychological impacts of social media on literally billions of people were not really anticipated. The rise of AI is the same. The world's first trillionaire will likely be someone who has a long-term vision for an AI application that others don't see.
You've seen a lot of AI implementations. Do you see common threads among people that have successful AI projects? Or, conversely, unsuccessful projects?
Most of the successful AI projects I've seen have been relatively small scale -- ones that didn't bite off more than they could chew. These small projects typically involve only about three people.
What AI-related technology are you most excited about right now?
I'm closely following efforts to fine-tune large language models. A large language model like GPT-4 is somewhat like a high school senior who knows English grammar and has a Wikipedia-level knowledge of basic facts. A fine-tuned large language model is somewhat like a PhD graduate who has deeper knowledge of a specific area. As recently as 12 months ago, fine-tuning a large language model wasn't feasible except by large tech companies. But tools are being rapidly developed and the ability of an ordinary human to create a custom fine-tuned large language model likely isn't far off. A related development I'm watching closely is how these fine-tuned agents can communicate with each other.
How deep of an understanding of AI technology does, for example, a CEO need in today's world?
History is littered with stories of businesses that missed key opportunities or threats. For example, in 1990, the average length of time that a Fortune 500 company stayed on the index was only 20 years. It's predicted that only half of the current 500 companies will be on the index 10 years from now. CEOs must have a grasp of AI at a high level so that they can hire and understand technical advisors -- and separate hype from practicality.
BONUS QUESTION: What is one thing about AI you want AI Boardroom's readers to know?
AI is both over-hyped and under-hyped. Successful businesses will be the ones that can distinguish unrealistic claims and goals from practical opportunities. Put another way, don't believe everything you read about AI -- including this interview!
You can read more from Dr. McCaffrey on AI via his column "The Data Science Lab" on VisualStudioMagazine.com and his regular contributions to PureAI.com. He can be found on LinkedIn here.
Posted by Becky Nagel
You're probably hearing a lot lately about AI chips or AI processors and how important they are for the coming AI age (and you've seen NVIDIA's stock reap the benefits!). What, exactly, makes an AI chip different, and what are the actual use cases? For example, are they only important for companies such as OpenAI that build large language models? Can they help your company with its internal AI projects? Do you need them in your laptop if you're running software that has AI features? We'll look at all this and more and offer you links for further reading.
One caveat before we start: Any company can call any chip an "AI" chip -- there is no official standard. All major chip companies are reputable in their manufacturing, but we bring this up because, just like anything labeled "AI" these days, you'll always want to do your own research, especially when making end-user purchases.
What Exactly Is an AI Chip?
Without getting too far into the technical weeds, AI chips are much like other computer chips except that certain features have been optimized to process AI calculations more quickly and efficiently (especially when many processing cores on a chip work together -- a design known as multi-core processing).
When it comes to AI, the biggest of these features is parallel processing, which, in its simplest form, means that the chip(s) can process many tasks simultaneously instead of one at a time. Of course, parallel processing has been around for a while, and it's not just used for AI. However, in AI chips, the parallel processing (and other features) is tailored for AI operations, often at the expense of other operations. If you're running a deep neural network (DNN), for example, which is a type of machine learning algorithm, you're probably running the same kind of mathematical calculations over and over again. Knowing this, the manufacturer can customize the parallel processing features specifically for these calculations, significantly boosting throughput and saving the companies that run these machines (particularly at large scale) everything from time to electricity costs.
It's not just parallel computing architecture that's important. Chip manufacturers can (and do) optimize other aspects of their chips for these kinds of calculations as well. For example, NVIDIA's Tensor Core graphics processing units (GPUs) are specifically designed to "accelerate the matrix computations involved in neural networks," according to the company.
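To make the "same calculations over and over" point concrete, here's a minimal sketch in Python (using NumPy, whose array operations dispatch to vectorized, parallel-friendly routines) of why a deep neural network is essentially the same matrix math repeated layer after layer. The layer sizes and count here are made up purely for illustration:

```python
import numpy as np

# A single neural-network layer is one matrix multiply plus an activation.
# A deep network repeats this pattern over and over -- which is why
# hardware tuned for fast, parallel matrix math pays off so dramatically.

rng = np.random.default_rng(0)

batch = rng.standard_normal((32, 256))  # 32 inputs, 256 features each
layers = [rng.standard_normal((256, 256)) * 0.01 for _ in range(8)]  # 8 layers

activations = batch
for w in layers:
    activations = np.maximum(activations @ w, 0.0)  # matmul, then ReLU

print(activations.shape)  # prints (32, 256) -- eight matrix multiplies deep
```

An AI chip's job, in essence, is to make that `@` (matrix multiply) step as fast and power-efficient as possible, because it dominates the workload.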
When Do You Need AI Chips?
Because AI chips are customized for the calculations most machine learning requires, they are very valuable to any company running large AI projects. However, they can also help smaller companies running projects with similar calculations: The work simply takes less time when the chips are designed for the processes you're running. That's really what AI chips are: processors customized to run AI workloads (which are computationally intense, so a lot of engineering went into this!). The flip side of that specialization is that companies whose daily workloads aren't AI-centric may be better served by processors optimized for other types of processing, particularly at large scale.
What about AI chips on your laptop or desktop? Can they really help you?
According to this paper from the Center for Security and Emerging Technology (CSET), it's not really the "AI" part of the chips that can help consumers but rather the benefits of all the engineering that has gone into these chips. The Center notes that because of all the re-engineering it took to make these chips so efficient (using smaller and smaller surfaces), AI chips are now often "more cost-effective than general-purpose chips." Although you might not need the AI customization these processors offer in your day-to-day work, the power you're getting for the price you're paying often makes a computer powered by "AI chips" worth it, even if you never dive into machine learning algorithms or natural language processing (such as chatbots).
Further Reading
This article is aimed at a general audience, so much of the technical information here is greatly (frankly, overly) simplified. To learn more about this topic, we suggest starting with these resources:
- "AI Chips: What They Are and Why They Matter" Whitepaper by Saif M. Khan
- "AI Chips Supply Chain" website
- AI chip vendor breakdown on GitHub
Posted by Becky Nagel
Last week, OpenAI announced Sora, its text-to-video generator -- and the results are, for the most part, pretty darn impressive. However, just like other AI text-to-video products, such as those from Google (VideoPoet), Meta (Make-A-Video) and Microsoft (GODIVA), Sora isn't available to the public yet. Rather, these services are in "research mode," which means that beyond the lucky few chosen to participate in the pre-release trials, none of us really knows how well they work beyond the examples given by the companies that created them.
In previous technical waves, when technologies were shown off but not made accessible -- especially those with major wow factors -- there was always a hint of "vaporware" in the mix. And although that could still be true to some degree with text-to-video AI, there are a number of legitimate reasons these companies could be holding off on expanding access.
To start, the U.S. presidential election is coming in November 2024. Just five days ago, OpenAI, Microsoft, Adobe, Meta and other AI and social media companies signed a pledge in Munich to "collaborate on developing tools for detecting misleading AI-generated images, video and audio" and to develop watermarking/metadata technology, the companies said. Considering that many technical experts agree watermarking alone won't work, and that metadata can be easily stripped, some analysts speculate that this next level of AI -- one that would put the ability to create deepfake videos into wide public reach -- won't go live until after the actual election.
There may also be issues of cost and infrastructure. Right now, without video generation, it's estimated that ChatGPT costs OpenAI $700,000 a day to operate. Videos will be exponentially more expensive to generate; there are reports that each video -- which may range from a few seconds to a minute in length -- can take hours to generate. Some providers may not have worked out a cost structure that makes it feasible to release these tools to a wider audience just yet (if they ever can).
Finally, even when things seem to be going well in the AI realm, things can go wrong. Just this week, Google had to temporarily pull Gemini's image generator because of an issue it likely never expected. So by rolling out slowly, vendors are giving themselves time to work out as many quirks as possible (although some may argue against excess caution, pointing to the theory that Facebook's Blender failed a few days before ChatGPT took off simply because it was made too safe).
So for right now, when it comes to AI text-to-video, expect it to remain in "curated demo" mode for a while, unless you're lucky enough to be in the hands-on preview group for Sora, VideoPoet, Make-A-Video or Stability AI's Stable Video Diffusion. We'll keep you posted as each service (and new competitors) rolls out to wider markets.
Posted by Becky Nagel
Here's the top gen AI news this past week:
- This appears to be a week for weird AI. As mentioned above, Google pulled Gemini's image generator over image issues, and earlier in the week, OpenAI had to "reset" ChatGPT's underlying models after they started spouting nonsense.
- On a positive note, OpenAI announced that it has "improved the memory" of ChatGPT chats, basically making ChatGPT more intuitive and user-friendly.
- On Wednesday, Google announced Gemma, a family of open models built from the same research as its Gemini LLM (the chatbot formerly known as Bard). (BTW: If you've ever wanted to play with an LLM directly and you're not a data scientist, there's a nice guide for getting your hands dirty with Gemini available here.)
- Looking for examples of how companies are using AI? How about this one exploring how it’s being used to accelerate mining discoveries.
- Or this one creating an AI biology model.
- Or this one discovering new materials.
- Or this one about a small weather prediction company bypassing competitors thanks to its use of AI.
- If you're looking for a new way to use generative AI in your company day-to-day, Adobe just added the ability to "chat" with documents to its PDF tools.
- Of course, adding AI tools isn't always a slam dunk, as JetBrains recently found out. They recently removed an AI assistant from their programs (and are being praised for doing so).
- Speaking of AI technology possibly gone too far, employers seem to be gravitating toward (and candidates seem to be pulling away from) the new AI-driven personality tests built into online job applications (although, to be honest, we're not sure we see what the AI angle is here).
Posted by Becky Nagel