Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
Risks and Opportunities as LLM Panic Hits SaaS Application Vendors. Commentary by Dr. Muddu Sudhakar, CEO & Co-founder of Aisera
“Vendors of legacy SaaS applications are facing a new ‘existential crisis,’ and incumbent tech companies are in an unusual position to capitalize on it, as customers have quickly come to expect generative AI in their applications. A recent McKinsey report highlights how generative AI will transform just about every category of business, particularly customer operations, marketing, sales, software engineering, and R&D, all of which are core to how SaaS companies operate.
When launching generative AI strategies, organizations might be tempted to create their own LLMs, which can be very time-consuming for an inexperienced team and cost large sums of money. There is also the challenge of continuously maintaining and updating the platform, which factors into the likelihood of sustained success. Although open-source systems may lessen the cost burden, they still bring steep learning curves and can be temperamental and difficult to fine-tune.
What organizations don’t realize is that this platform shift does not require a complete reengineering of their infrastructure: LLMs can be overlaid on existing platforms, allowing them to get to market much faster and with less disruption.
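As a rough sketch of that overlay pattern (with hypothetical endpoint URLs and response fields, not any specific vendor’s API), an existing application service can be wrapped with an LLM call rather than rebuilt:

```python
import requests

# Hypothetical endpoints for illustration; neither is a real vendor API.
LLM_API_URL = "https://llm.example.com/v1/generate"
APP_API_URL = "https://app.example.com/api/tickets"

def summarize_ticket(ticket_id: str, api_key: str) -> str:
    """Overlay an LLM on an existing SaaS record without reengineering the platform."""
    # 1. Reuse the existing application API unchanged.
    ticket = requests.get(f"{APP_API_URL}/{ticket_id}", timeout=10).json()

    # 2. Layer the LLM on top as new functionality.
    resp = requests.post(
        LLM_API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": f"Summarize this support ticket:\n{ticket['description']}"},
        timeout=30,
    )
    return resp.json()["text"]  # response schema assumed for illustration
```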
Reinventing an AI system can be disastrous; the best strategy for SaaS enterprises is therefore to integrate generative AI into their existing platforms by adopting best-of-breed LLMs. While some companies flounder trying to create their own LLMs, incumbents like Microsoft and ServiceNow are benefiting. By leveraging their massive user bases to train their LLMs more easily, incumbents are reinvigorating their franchises and enhancing their products and applications with generative AI without reinventing the wheel. In turn, they are reaping more value and revenue from AI.”
The Innovation That’s Making Generative AI Viable in Financial Services. Commentary by Chandini Jain, Chief Executive Officer, Auquan
“Interest in generative AI has reached a fever pitch throughout the financial services industry due to the prevalence of knowledge-intensive work involving vast volumes of unstructured data. Company due diligence, know-your-customer (KYC) procedures, ESG research, and risk monitoring are all highly manual and time-intensive use cases that appear tailor-fit for generative AI.
But so far, efforts to deploy generative AI tools built on large language models (LLMs) in financial services have largely fallen short, because the models can’t access industry-specific data, can’t cite their sources, and tend to fabricate responses.
However, a new technique called retrieval-augmented generation (RAG) is proving to be a breakthrough for financial services. RAG combines the power of retrieval-based models, such as the ability to access domain-specific datasets, with the natural language responses of generative models. RAG-based systems minimize the risk of hallucinations, and they can cite their sources.
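A minimal sketch of the RAG pattern described above, using TF-IDF retrieval over a toy in-memory corpus; the final LLM call is left as a placeholder, since any generative model could sit behind it:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy domain-specific corpus; in practice: filings, KYC records, ESG reports, etc.
documents = {
    "10-K/2023/risk": "The company reports elevated credit exposure to commercial real estate.",
    "ESG/2023/report": "Scope 2 emissions fell 12% year over year after a renewable power deal.",
}

def retrieve(query: str, k: int = 1):
    """Return the top-k (source_id, text) pairs most similar to the query."""
    ids, texts = zip(*documents.items())
    vec = TfidfVectorizer().fit(texts + (query,))
    sims = cosine_similarity(vec.transform([query]), vec.transform(texts))[0]
    return [(ids[i], texts[i]) for i in sims.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Ground the prompt in retrieved, ID-tagged sources so answers can cite them."""
    context = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(query))
    return (f"Answer using only the sources below, citing their IDs.\n"
            f"{context}\n\nQuestion: {query}")

# A real system would pass this prompt to an LLM; the grounding step is what
# lets the model cite sources and reduces fabricated responses.
print(build_prompt("What credit risks does the company face?"))
```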
RAG-based AI is beginning to free finance professionals from mundane manual data work so they can focus on what they’re good at: conducting high-level analysis and making strategic decisions. Because RAG-based systems can quickly surface hidden risks in noisy, unstructured data — and can back up their work and build trust in the process — they’re proving valuable in helping finance teams make better decisions more quickly and outperform their peers.”
On data model collapse. Commentary by Laura Malins, VP of Product at Matillion
“Whilst model collapse isn’t likely to affect a huge number of businesses from a technical point of view (only the tech giants have the equally large budgets needed to create and maintain these AI models), it is a very real risk.
The early signs include repeated content in the results generated by the models, as well as incorrect answers to ‘mainstream questions’ (think capital cities, for example). These are all flags of reduced intelligence in the models themselves. When a model starts to collapse, it loses the context of edge cases in its learning phase and keeps generating the same output.
Preventing model collapse is about taking ownership of the outputs – training the model as you would a child, by correcting mistakes and highlighting the correct response. It’s also important to regularly feed more training materials into the model, from a wide range of sources.
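On the data side of that advice, here is a toy sketch (thresholds and record shapes are assumptions) of capping how much model-generated text gets recycled into each new training round:

```python
import random

def build_training_mix(human_examples, synthetic_examples, max_synthetic_frac=0.1):
    """Cap model-generated data so diverse, human-written sources dominate training."""
    # Human-written examples anchor the distribution the model learns from.
    mix = list(human_examples)

    # Admit only a bounded amount of the model's own output back into training;
    # recycling too much of it is what erodes edge cases and drives collapse.
    budget = int(max_synthetic_frac * len(mix))
    mix += random.sample(synthetic_examples, min(budget, len(synthetic_examples)))

    random.shuffle(mix)
    return mix
```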
Model hallucination is more of a challenge, and one that is set to have a wider impact. Models hallucinate and get things wrong. How do we stop that? How do we reduce the frequency of these mistakes? Better prompts make a big impact, as do keeping a human in the loop and working with partners who ensure your data is AI-ready. There’s a phrase in the ETL community, ‘garbage in, garbage out’, and I think it applies to AI too.”
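One simple human-in-the-loop hedge, sketched here with a crude lexical overlap check standing in for a real verification step (the threshold and field names are assumptions):

```python
def gated_answer(question: str, model_answer: str, sources: list[str],
                 overlap_threshold: float = 0.3) -> dict:
    """Escalate poorly grounded answers to a human instead of returning them."""
    answer_words = set(model_answer.lower().split())
    source_words = set(" ".join(sources).lower().split())
    overlap = len(answer_words & source_words) / max(len(answer_words), 1)

    # Below the threshold, the answer shares little vocabulary with its sources,
    # so a human reviews it rather than the user receiving it directly.
    if overlap < overlap_threshold:
        return {"status": "needs_human_review", "question": question}
    return {"status": "ok", "answer": model_answer}
```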
AI-Powered Healthcare Transformation: Revolutionizing Value-Based Care. Commentary by Jay Ackerman, President & CEO, Reveleer
“Data and process fragmentation in the U.S. healthcare system contribute to administrative complexity and $265 billion in unnecessary costs. As the healthcare industry advances toward value-based care (VBC), payers and providers face increasing pressure to enhance quality care outcomes while lowering costs. AI-driven technology offers a promising solution to improve member care, drive quality outcomes, control costs, improve physician satisfaction, and advance VBC transformation. AI enables seamless patient data integration and actionable insights to predict disease progression, identify at-risk populations, and suggest appropriate interventions, reducing costs associated with advanced disease management.
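As a hedged illustration of the at-risk-population idea (a generic sketch on synthetic data, not Reveleer’s actual method), a simple classifier can score members and flag those above a risk threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic member features: [age, chronic_conditions, er_visits_last_year]
X = rng.normal([55, 1.5, 0.8], [12, 1.2, 1.0], size=(500, 3))
# Synthetic label standing in for observed disease progression.
y = (0.03 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 1, 500)) > 4.0

model = LogisticRegression(max_iter=1000).fit(X, y)

# Flag members whose predicted progression risk exceeds a review threshold,
# so care teams can prioritize outreach before the next encounter.
risk = model.predict_proba(X)[:, 1]
at_risk = np.where(risk > 0.7)[0]
print(f"{len(at_risk)} members flagged for early intervention")
```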
AI can also deliver personalized treatment plans and medication regimens, leading to better adherence and outcomes and reducing the need for costly adjustments and hospitalizations. AI-driven patient engagement tools can educate and motivate patients to manage their health better. AI can optimize resource allocation and ease administrative burden by automating menial tasks so clinicians can focus on work requiring higher expertise. AI technology enables providers to monitor and analyze healthcare quality indicators for continuous improvement, driving quality of care, better patient experiences, and lower costs associated with avoidable errors.
The true transformative potential of AI lies in its capacity to rapidly extract valuable insights from various unconnected data sources and present healthcare providers with a comprehensive view of member risk before and during patient encounters. By harnessing AI in these capacities, healthcare organizations can enhance the adoption of technology by providers and redirect their attention toward value-based care, ultimately resulting in improved patient outcomes, cost reduction, and a more streamlined and effective healthcare system.”
On conversational assistants. Commentary by Reshma Iyer, Director of Product Marketing and E-Commerce at Algolia
“AI in the form of a conversational assistant can result in a rich and powerful customer experience, with the AI playing the role of a trusted ‘guide’ taking the customer through a journey that showcases the results most likely to resonate. This is a creative and deeply engaging approach to bringing a shopper closer to a set of products they are likely to purchase. In the near future, we anticipate that many of the related technologies will come together in a seamless conversational format that fundamentally alters customer service. Whether a customer uses text, video, images, or voice, their reason for contact will be understood quickly and automatically. Their history and other details will be connected automatically and in context, and, aided by generative AI, the virtual agent will be able to promptly ascertain the most appropriate path for the specific situation at hand.”
On the new open letter calling for stronger regulation of AI before it harms society and individuals. Commentary by Amanda Brock, CEO of OpenUK
“As the letter points out, AI and its pace of development offer many opportunities if managed carefully and distributed fairly. Advanced AI systems could help humanity cure diseases, elevate living standards, and protect our ecosystem. But like all new technology it carries risk, and our understanding of that risk will need to be reviewed regularly. The letter identifies that we are already behind schedule for this reorientation, and the importance of getting back on schedule cannot be overstated. That requires a focus on appropriate, light-touch legislation that is agile, able to flex and adapt to an unforeseen future. We must learn from the prescriptive approach to legislation, which fosters inflexibility, and not let AI fall into the trap that internet regulation has, where rules are out of date and rarely updated because the act of updating them is torturous.
The call for collaborative spending on ethics makes sense, but so too would greater collaborative innovation and sharing of R&D, models, and data, with an open infrastructure for AI creating greater access and democratizing this vital technology. Again, this would mean learning from history and not allowing the vital technologies of the future to end up in the hands of the few, as they have in the past.”
On data center energy usage. Commentary by Ty Colman, Chief Revenue Officer, Optera
“Moving data centers to the cloud is practical and effective at mitigating emissions. The largest providers – Amazon, Google and Microsoft – have made public sustainability commitments and their operations are demonstrably low-emission. In addition, moving data centers to locations with clean energy sources like geothermal significantly reduces reliance on fossil fuels.
Other emerging approaches show promise but will require further investment before they’re efficient at scale. Data centers can employ low-global-warming-potential (GWP) refrigerants or novel refrigeration technologies to reduce cooling emissions, and explore waste heat re-use. Ultimately, though, rapid decarbonization through renewables, either on-premises or through moving to cloud storage, is the fastest path and can cut data center emissions by more than 90%.”
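A back-of-the-envelope check on that figure, using assumed carbon intensities: emissions scale with energy times grid intensity, so moving the same load onto a mostly renewable supply cuts emissions roughly in proportion.

```python
# Illustrative carbon intensities (kg CO2e per kWh); actual values vary by region.
FOSSIL_GRID = 0.45     # coal/gas-heavy grid
RENEWABLE_MIX = 0.04   # supply dominated by renewables

annual_kwh = 2_000_000  # assumed annual draw of a small data center

on_prem = annual_kwh * FOSSIL_GRID
renewable = annual_kwh * RENEWABLE_MIX

print(f"Estimated reduction: {1 - renewable / on_prem:.0%}")  # ~91% under these assumptions
```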
AI can help with Alzheimer’s clinical trials. Commentary by Carl Foster, Chief Business Officer at Standigm
“There are many potential improvements AI can make that could increase the efficacy of clinical trials. A primary area is using AI to create better biomarkers, which are the primary point of categorization used to design clinical trials. This is particularly true for diseases that are especially difficult to diagnose, such as Alzheimer’s. Alzheimer’s often cannot be diagnosed until a patient is deceased and an autopsy is performed, a challenge that stands in the way of identifying patients for any potential Alzheimer’s clinical trial, compared to other kinds of dementia that are diagnosed with greater ease.
The problem with biomarkers is that they often identify the wrong patient. However, if AI allows you to look at patients’ genes and proteins and identify that they are likely to have the disease, you are more likely to find the right biomarkers to use for designing a better clinical trial. Ultimately, if you can correctly enroll patients in a treatment, fewer patients will be required for the trial. Further, if you believe in a genomic or protein biomarker for some disease, you can enroll or exclude patients from a clinical trial based on that biomarker even if it is not yet FDA approved.”
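A minimal sketch of that enrollment step, with a stubbed biomarker model and hypothetical field names (not any approved screening procedure):

```python
def select_trial_candidates(patients, predict_biomarker_prob, cutoff=0.8):
    """Enroll only patients whose predicted biomarker probability clears the cutoff.

    `predict_biomarker_prob` stands in for a trained genomic/proteomic model.
    """
    enrolled, excluded = [], []
    for patient in patients:
        # Probability the patient carries the target biomarker, and so likely
        # has the disease, estimated from genes and proteins.
        p = predict_biomarker_prob(patient)
        (enrolled if p >= cutoff else excluded).append(patient["id"])
    return enrolled, excluded

# Example with a stubbed model:
patients = [{"id": "P1", "score": 0.92}, {"id": "P2", "score": 0.41}]
print(select_trial_candidates(patients, lambda p: p["score"]))  # (['P1'], ['P2'])
```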
How AI is used for cyber attacks. Commentary by Phil Mason, CEO, CyberCX UK
“AI has huge potential. In terms of cyber security, there are ways in which it can support and enhance defences. But there are major risks in AI as an enabler of cyber-attacks, and the technology has increasingly been used to mount more advanced and sophisticated attacks of late. The most important thing is to understand these risks, as this allows us to defend against them.
One benefit of AI for cyber criminals is the ability to train machine learning algorithms to detect vulnerabilities and misconfigurations, exploit them, and launch attacks with minimal human intervention. Furthermore, threat actors are employing AI-powered automation to launch cyber-attacks such as automated malware, targeted phishing campaigns, and ransomware distribution. Even social engineering attacks are benefiting from AI, where the technology is used to analyse and replicate specific human behaviours to improve the manipulation of targets.
For data breaches, AI can be used to collect and analyse massive quantities of personal information from various sources. Not only can better sourcing and analysis of personal data allow criminals to hold individuals to ransom at greater scale, but it also enables more sophisticated social engineering attacks, such as spear phishing, in which AI can help assailants craft highly convincing messages, including deep fakes, to improve their chances of success.
Furthermore, the combination of AI’s ability to write code (be it in order to evade security measures or develop advanced malware) and to learn at an incredible pace (and therefore pivot, writing even more new code) means it can stay undetected for longer, and do more harm in the process.
While malicious AI is a threat to security, the same capabilities that make it a threat also make it a huge opportunity to combat such attacks. As always, the aim of the game is to develop better defence technology faster than malicious actors can develop theirs; and in the case of AI, this will no doubt involve leveraging AI itself to enhance defences.”
How AI can bridge the cybersecurity skills gap. Commentary by Dov Goldman, VP Risk Strategy at Panorays
“A staggering 68% of organizations say they are exposed to additional risks due to the cybersecurity skills shortage, according to the 2023 Cybersecurity Skills Gap Report. AI can play a crucial role in bridging the cybersecurity skills gap by augmenting the capabilities of cybersecurity professionals and helping to improve the overall security posture of organizations. As cybercriminals become more advanced and leverage AI to launch stealthier attacks, organizations will also need to leverage AI to defend themselves.
AI is instrumental in automated threat detection and response. AI-powered security tools can continuously monitor network traffic, system logs, and user behavior to detect anomalies and threats in real time. This gives organizations the upper hand when a threat is imminent, letting them quickly pinpoint, address, and mitigate any potential security issue. This level of granular detection is also instrumental in the supply chain, where cyber threats can have a domino effect: supply chain disruptions can impact multiple parties and cause immense long-term havoc.
AI can also assist in gathering, analyzing, and disseminating threat intelligence from various sources, including the dark web, known vulnerabilities, and historical attack data. With access to predictive intelligence, AI can pinpoint potential vulnerabilities in the system and address them accordingly. AI also looks at behavior with immense precision. AI can establish a baseline of normal user and system behavior, enabling it to detect deviations indicative of malicious activity. It can also use historical data and machine learning models to predict potential risks and vulnerabilities.
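A compact sketch of that baseline-and-deviation idea, using an isolation forest over synthetic login features; the feature set and thresholds are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline of normal behavior: [login_hour, MB_transferred, failed_logins]
normal = np.column_stack([
    rng.normal(13, 2, 1000),    # logins cluster around midday
    rng.normal(50, 15, 1000),   # typical data transfer
    rng.poisson(0.2, 1000),     # occasional failed login
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. login moving 900 MB after six failed attempts deviates sharply
# from the learned baseline and is flagged.
print(detector.predict([[3, 900, 6]]))  # [-1] marks an anomaly; 1 would be normal
```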
As AI is adopted across industries, AI-based governance will become more widespread. This innovative framework empowers organizations and governments to proactively detect, mitigate, and respond to evolving cyber threats. According to Oh Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2023, 26% of respondents reported having access to, and taking advantage of, cybersecurity training. However, 64% noted they had no access to training. This is where AI can bolster training numbers. AI-powered training platforms can help cybersecurity pros enhance their skills with personalized learning paths and hands-on training through simulations and scenarios. This hands-on training can upskill cybersecurity pros and prepare them for the realities of an actual cyber attack.”