Effective communication is an important skill to have, and in the AI era it's more important than ever. In this episode of AI Today, hosts Kathleen Walch and Ron Schmelzer interview Patti DeNucci. She is an author, speaker, workshop facilitator, and consultant, and will be keynoting the PMI Austin, TX Professional Development Day on May 2, 2024.
How does AI impact communication?
Generative AI tools are having a profound impact on the way people work, write, and communicate. In this episode, Patti discusses how GenAI tools are changing the way people communicate, as well as ways AI can improve it. Sometimes you can't find the right words for an email or a note to a friend. With the help of GenAI, you can now get the conversation started, or fully written, with just a few sentences in your prompt.
Patti also touches upon how AI is changing and impacting the project management profession and role. Project managers can leverage their soft skills of communication as well as critical thinking, adaptability, and attention to detail to build conversational skills with AI.
Show Notes:
Experimenting, testing, and refining your prompts are essential. The journey to crafting the perfect prompt often involves trying various strategies to discover what works best for your specific needs. A best practice is to constantly experiment, practice, and try new things using an approach called “hack and track”. This is where you use a spreadsheet or other method to track what prompts work well as you experiment. In this episode of AI Today hosts Kathleen Walch and Ron Schmelzer discuss hack and track in detail.
Keeping track of prompts
It's rare to get the desired response on your first attempt. An iterative process of testing different prompts, analyzing the responses, and then tweaking your approach allows you to gradually hone your technique. Another challenge is that LLMs are constantly evolving. The performance of LLMs is very much domain and task dependent, and the performance will change over time. A current prompting best practice is to use a spreadsheet or other method to track what prompts work well as you experiment.
How to set up your Hack and Track Spreadsheet
Keeping track of which prompts work best for you in which situations, including which LLMs are providing the best results at that time, can be incredibly helpful for your colleagues as well. There are many LLMs, and at any particular time one LLM may perform better than another in a given situation. Without keeping track of the prompts you've written and tested, it's hard for others to try using these prompts themselves.
When creating a spreadsheet to keep track of prompts, the details matter. Every spreadsheet may be set up a little differently, but you'll want to include some essentials. Criteria you can use when setting up your hack and track sheet include: name of the task or query, prompt pattern(s) used, LLM used, date last used for this prompt, prompt chaining approach used (if any), and perhaps the person or group that created the prompt.
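The columns above can just as easily live in a shared CSV file as a spreadsheet. Here is a minimal sketch of how that might look in Python; the column names and helper function are illustrative, not part of any formal hack-and-track standard:

```python
import csv
import os
from datetime import date

# Columns mirror the criteria described above; the exact names are up to you.
FIELDS = ["task", "prompt_pattern", "llm", "last_used",
          "chaining_approach", "author", "prompt_text"]

def log_prompt(path, **entry):
    """Append one tested prompt to the hack-and-track sheet (a CSV file)."""
    entry.setdefault("last_used", date.today().isoformat())
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(entry)

log_prompt("hack_and_track.csv",
           task="Summarize meeting notes",
           prompt_pattern="RTP",
           llm="gpt-4",
           chaining_approach="none",
           author="Kathleen",
           prompt_text="Act as a project manager. Summarize these notes: ...")
```

Because it's a plain CSV, colleagues can open the same file in Excel or Google Sheets and add their own tested prompts.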
Kathleen and Ron discuss their own experiences with hack and track in this episode and how learning from others is so critical. Seeing how others write prompts helps you get creative and think of ways to use LLMs you may never have thought of. It also lets you see how others at your organization are writing prompts and the results they are getting.
Show Notes:
Plugins for Large Language Models (LLMs) are additional tools or extensions that enhance the LLM's capabilities beyond its base functions. In this episode hosts Kathleen Walch and Ron Schmelzer discuss this topic in greater detail.
Can I use plugins with ChatGPT?
Plugins can access external databases, perform specific computations, or interact with other software and APIs to fetch real-time data, execute code, and more. In essence, they significantly expand the utility of LLMs, making them more versatile and effective tools for a wide range of applications. They bridge the gap between the static knowledge of a trained model and the dynamic, ever-changing information and capabilities of the external world. Plugins can be used with many different LLMs.
Why use plugins?
People use plugins for a variety of reasons. They give you access to real-time information by pulling up-to-date data from the web or other sources. They can also perform specialized tasks like solving complex mathematical problems, generating code, or providing translations with expertise that might not be fully developed in the base model. Plugins also enable LLMs to interact with other applications and services, allowing for dynamic content generation, automation of tasks, and enhanced user interactions. They also allow for customization and personalization, as well as improved performance and efficiency. In the episode we discuss this all in greater detail.
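Conceptually, a plugin is just external code the LLM runtime can route a request to. The sketch below illustrates that routing idea with a toy registry and dispatcher; the names (`register_plugin`, `dispatch`) are hypothetical and do not reflect any vendor's actual plugin API:

```python
# Toy registry of available plugins, keyed by name.
PLUGINS = {}

def register_plugin(name, fn):
    """Make a plugin available to the (hypothetical) LLM runtime."""
    PLUGINS[name] = fn

def dispatch(tool_call):
    """Run the plugin the model asked for and return its result as text."""
    name = tool_call["name"]
    args = tool_call.get("args", {})
    if name not in PLUGINS:
        return f"error: no plugin named {name}"
    return str(PLUGINS[name](**args))

# A toy "calculator" plugin, covering the specialized-task use case above.
register_plugin("add", lambda a, b: a + b)

result = dispatch({"name": "add", "args": {"a": 2, "b": 3}})
print(result)  # → 5
```

Real plugin systems add schemas, authentication, and sandboxing on top, but the core loop — model emits a structured request, runtime executes external code, result flows back as text — is the same.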
Show Notes:
As folks continue to use LLMs, best practices are emerging to help users get the most out of LLMs. OpenAI's ChatGPT allows users to tailor responses to match their tone and desired output goals. Many have reported that using custom instructions results in much more accurate, precise, consistent, and predictable results. But why would you want to do this and why does it matter? In this episode, hosts Kathleen Walch and Ron Schmelzer discuss why this is a best practice.
What are custom instructions in ChatGPT?
In ChatGPT, custom instructions are provided by answering two questions in settings; your answers then get sent along with your prompts.
It's important to note that once created, these instructions will apply to all future chat sessions (not previous or existing ones). This allows you to make somewhat permanent settings that don't have to be constantly reset. Custom instructions are generally limited to about 1500 characters, so keep them precise and concise.
Show Notes:
Companies of all sizes in every industry are looking to see how Artificial Intelligence (AI), machine learning (ML), and cognitive technology projects can provide them a competitive edge. They want to provide efficiencies and improve ROI in today’s competitive landscape. As a result, this creates tremendous opportunity in the field of AI for professionals who are CPMAI certified and follow the CPMAI methodology. This globally recognized certification helps you stand out from the pack when it comes to scoping, managing, and understanding AI and ML projects. In this episode of the AI Today podcast we interview George Fountain. He is Senior Project Manager at Booz Allen Hamilton (BAH), and is CPMAI certified.
What is CPMAI?
Staying up to date on AI in today's rapidly changing world is critical. However, just reading the news about AI and actually being able to apply AI best practices to projects are two different things. As a senior project manager, George needs a step-by-step framework to successfully run and manage client projects. With the recent buzz around GenAI, this has become more important than ever. CPMAI provides the framework needed, and George shares how he is applying it in the real world.
Is the CPMAI certification worth it?
When it comes to investing your time and money in certifications, you want to make sure it's worth it. For George, CPMAI has been transformational for his career, helping him grow key skills and providing the knowledge, lexicon, and credibility he was after. George shares his feedback and experience on this podcast, and wraps up the discussion by sharing how he believes AI will impact organizations in the years ahead.
Show Notes:
To improve the reliability and performance of LLMs, sometimes you need to break large tasks/prompts into sub-tasks. Prompt chaining is when a task is split into sub-tasks with the idea to create a chain of prompt operations. Prompt chaining is useful if the LLM is struggling to complete your larger complex task in one step. In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer discuss prompt chaining. This is part 2 in our 6 part series on Prompt Engineering Best Practices.
Why do you need to chain prompts?
In this episode we explain when and why to use prompt chaining. Chaining prompts is a best-practice approach when you're answering questions from information in a document. It's also good when you want to validate and refine responses, or when you're writing long-form content and want to break the writing process into outlined sections or chapters that the AI can expand upon in sequence. Prompt chaining also provides a stepwise approach to research projects, and suits anything that needs an iterative approach, such as iterative data analysis, computer programming, and other process-based tasks.
What is the prompt chaining technique?
There are a few different types of prompt chains. We go over chain-of-thought prompting (breaking down big tasks), the self-consistency and ReAct techniques (thinking harder and smarter), and the Flipped Interaction and Question Refinement patterns.
When looking to chain prompts, there are some best practices to follow: decompose the task, craft and test your prompts, chain the prompts, and evaluate and iterate as needed. Your prompt won't be perfect on the first try. Like we always say: Think Big, Start Small, and Iterate Often.
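The decompose-then-chain flow above can be sketched in a few lines of code. Here `call_llm` is a stand-in for whatever LLM client you actually use (its name and behavior are placeholders, not a real API); the point is how each step's output feeds the next prompt:

```python
def call_llm(prompt):
    # Placeholder: in practice this would call your LLM provider's API.
    return f"[model response to: {prompt[:40]}...]"

def summarize_with_chain(document):
    # Step 1: decompose — extract the key points from the document.
    points = call_llm(f"List the key points in this document:\n{document}")
    # Step 2: chain — feed step 1's output into a drafting prompt.
    summary = call_llm(f"Write a one-paragraph summary from these points:\n{points}")
    # Step 3: validate and refine the response, then return it.
    return call_llm(f"Check this summary for accuracy and improve it:\n{summary}")

print(summarize_with_chain("Q3 planning notes: ..."))
```

Each stage is a small, testable prompt, so when the output is wrong you can tell which link in the chain to fix — and iterate on just that one.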
This is part 2 in our 6 part series on Prompt Engineering Best Practices. Subscribe to AI Today to get notified of upcoming episodes in this series and learn the best practices for prompt engineering from Cognilytica AI thought leaders.
Show Notes:
LLMs are basically big "text predictors" that generate outputs based on what they expect is the most likely desired output, given the text-based input the user provides: the prompt. Prompts are natural language instructions for an LLM, provided by a human, so that it will deliver the desired results you're looking for. In this episode of the AI Today podcast, hosts Kathleen Walch and Ron Schmelzer dig into using prompt patterns. They explain what prompts are, why good prompts are so important, and why using prompt patterns is a prompt engineering best practice.
What are prompt patterns?
To ensure your prompt contains all essential elements for optimal results, adopt a structured "formula" approach. Most popular prompt pattern formulas include aspects of the following: act as a [ROLE], performing a [TASK], responding in a [FORMAT]. There are many different prompt patterns out there, and the right one for you depends on your needs. There are a number of considerations for determining which prompt engineering pattern or formula to use. In the podcast we discuss the RTP and CREATE patterns and provide examples of when to use each.
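The [ROLE] / [TASK] / [FORMAT] formula is simple enough to capture as a template. A minimal sketch, assuming a small helper of our own invention (`rtp_prompt` is not a library function):

```python
def rtp_prompt(role, task, fmt):
    """Fill in the Role / Task / Format prompt formula described above."""
    return f"Act as a {role}. {task} Respond in {fmt}."

prompt = rtp_prompt(
    role="project manager",
    task="Draft a status update for a project that is two weeks behind schedule.",
    fmt="a short bulleted list",
)
print(prompt)
```

Keeping the formula in one place like this also pairs naturally with a hack-and-track sheet: you can log which role/task/format combinations worked and reuse them.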
This is part 1 in our 6 part series on Prompt Engineering Best Practices. Subscribe to AI Today to get notified of upcoming episodes in this series and learn the best practices for prompt engineering from Cognilytica AI thought leaders.
Show Notes:
AI is helping to re-imagine experiences of all sorts, including air travel. At the 2024 SXSW Conference and Festivals, Bernadette Berger, Director of Innovation at Alaska Airlines, presented on "The Sky's the Limit: How AI will Re-imagine Airports". In this episode of the AI Today podcast, hosts and AI thought leaders Kathleen Walch and Ron Schmelzer have the opportunity to interview Bernadette.
How is AI transforming the aviation industry?
Transforming industries, especially ones that have been around for a long time, is not always easy. Yet AI is transforming the aviation industry and passenger experience in some profound ways. From facial recognition technology to help identify passengers, to predictive analytics for logistics and routing, aviation and travel are already seeing the benefits of AI. Add personalized digital assistants and real-time navigation, and airports will become fully accessible and immersive hubs for travel, retail, and entertainment. Bernadette shares with our listeners what this AI-enabled vision will look like.
How is AI affecting the airline industry?
Already impacting passenger and customer experiences, AI is only going to continue to have an impact. Bernadette provides examples of what's currently happening and how AI will continue to transform these experiences in the future. But, as with any technology, the good comes with its challenges. Bernadette shares how AI will re-imagine air travel, some of the challenges with AI adoption within the transportation industry, and what can be done to address these challenges both now and in the future.
This podcast was recorded as part of the AI Today @ SXSW 2024 podcast activation. This podcast is sponsored by Intel, the spark for the dreamers who do. They dream of a life with no diseases, of cleaner, greener, more reliable energy, of advancing education by bringing AI everywhere. Intel is the spark to start something new. To know that no dream is too daring when you have the right foundation. It starts with Intel. Learn more at Intel.com/starts
Show Notes:
AI is having an impact on every industry, including healthcare, and is helping to reshape the practice of medicine both now and in the future. In this episode of the AI Today podcast we interview Dr. Jag Singh. He is a Professor of Medicine at Harvard Medical School, focusing on the application of AI in the practice of medicine, and a practicing cardiologist at Mass General Brigham (MGB). He also recently presented on this topic at the 2024 SXSW Conference and Festivals.
What is the impact factor of AI in medicine?
Hyperpersonalization, one of the 7 patterns of AI, is being applied to healthcare. Dr. Singh shares how he sees AI impacting personalized healthcare both now and in the years ahead. Additionally, he shares how virtual care, the role of sensors, and other AI technologies will impact care for patients in the years ahead.
Dr. Singh also shares insights on what he sees as the most likely applications for AI in healthcare in the short and long term, as well as the critical success factors for AI's impact in the practice of medicine. He wraps up the podcast on an inspirational note with what he believes the future of AI is in general, and its application to organizations and beyond.
This podcast was recorded as part of the AI Today @ SXSW 2024 podcast activation, sponsored by Intel.
Show Notes:
During the SXSW 2024 event, Wei Li presented on AI Everywhere with Software and Hardware. In this episode of the AI Today podcast we interview Wei Li. He is VP/GM of the AI Software Engineering Team at Intel.
He shares insights from that talk and what he means by AI being everywhere in both hardware and software. Additionally, he shares the most successful applications of AI projects both at Intel and with customers.
But with success there can also be challenges. Wei shares his advice, lessons learned, and potential gotchas he has seen over the years, providing insights for others as they look to run their data, analytics, and AI projects. He ends the podcast on an inspirational note, sharing what he believes the future of AI is in general and its application to organizations and beyond.
This podcast was recorded as part of the AI Today @ SXSW 2024 podcast activation, sponsored by Intel.
Show Notes: