AI dominated the news in 2023. As this emerging technology and its use cases evolve, it has the potential both to assist businesses and to pose competitive threats. How will financial advisors adapt when their clients also have access to AI tools for investing?
Questions remain around data governance and ownership. Will 2024 be the year financial services adopts AI?
Strategic Leadership in the Generative AI Era
Today’s rapidly changing global landscape offers challenges and opportunities for business leaders. They confront constant disruptions, and generative artificial intelligence is a significant driver of some of these changes. Every day new information and developments emerge, making it challenging for leaders to decide the best course of action.
In this dynamic environment, adopting a passive stance could jeopardize progress. It is essential for leaders to understand both the immense potential and the associated risks of generative AI early on. Failure to do so may lead to missed opportunities and a loss of competitive ground to rivals who are more agile in testing and adopting emerging AI capabilities.
One common question is whether generative AI’s prominence is merely hype. While some may exaggerate where the technology’s capabilities stand today, there’s no denying its transformative power. Insights from history and previous tech revolutions can help leaders understand and navigate this AI phase.
Historically, the initial skepticism around the internet’s emergence turned into an acknowledgment of its massive impact. Similarly, the boom in marketing technology tools offers lessons on the pitfalls of hasty adoption without a strategy. The generative AI revolution, while mirroring past tech trends, is unique in its rapid development and potential impacts. This accelerated pace is due to existing infrastructures and prior technological advances, coupled with its release to the masses.
Recent reports highlight the growing adoption of AI. Thus, to harness its benefits and avoid its challenges, businesses should draw from past lessons and current knowledge.
In the realm of financial advisory, AI has the potential to become an indispensable tool for advisors, whose work relies heavily on intellectual capabilities and knowledge-based decision-making. Generative AI in particular stands to augment advisors’ capabilities and increase efficiency, enabling sophisticated management and use of their intellectual property (IP) when leveraged within secure, non-public domains.
Questions are being raised about how to continue providing added and differentiated value when clients and competitors have access to the same emerging tools and the new level of intelligence these platforms offer. At the same time, this technology does not replicate the human elements of empathy, trust, emotion and personal connection, which are cornerstones of the client-advisor relationship.
This is pushing financial advisors to move beyond focusing on specific tools or technologies. Instead, they are considering how AI integrates into their individual strategies or their firm’s overarching goals, culture, client needs, cybersecurity, privacy, and compliance requirements.
Acknowledging the benefits and challenges of AI, particularly in financial advising, highlights its wider impact on independent or organizational strategy. This goes beyond adopting technology; it encompasses scenario planning and change management. As individuals and companies embed AI into their workflows and operations, they need to adjust their strategies and mindsets for diverse future scenarios, preparing for the significant changes AI introduces so they can maintain resilience and agility in a dynamic business environment. The evolving nature of work demands reskilling and constant learning. This isn’t just about implementing software; it’s about reshaping the organization’s culture and mindset for an AI-powered world.
As we navigate the complexities of the generative AI era, it is crucial for individuals and businesses to stay informed and proactive. The lessons of the past and the realities of the present offer a clear message: Success hinges on strategic planning and a willingness to embrace change. For those in the realm of financial advising and beyond, it is not just about adopting new technologies, but about integrating them in a way that aligns with your organization’s long-term vision and ethical standards. The road ahead is not without challenges, but by prioritizing continuous learning and adaptability, businesses can leverage AI to not just meet the demands of today, but thrive, setting standards for the future.
– Lynda Koster, Cofounder & Managing Partner, Growthential
Ask an Expert: Cutting Through the AI Hype
Q: Is AI as disruptive as the hype suggests?
It depends where you’re looking. In a report on the future of generative AI, McKinsey & Company examined the potential for automating work across several industries and found that knowledge work was most prone to disruption from generative AI. Workforce training, for example, is highly susceptible to automation, whereas agriculture and transportation are less so. Creative industries, such as media and the arts, were also high on the list.
Q: How should I think about using AI in my organization?
Even in industries that are ripe for disruption from AI, like marketing, you’re more likely to find usage of generative AI at small to midsize companies, since they tend to be more willing to experiment. Use of generative AI is often a bottom-up phenomenon, with individuals in an organization finding efficiencies by applying the tools to their specific work, usually with a lot of trial and error, and sometimes without even telling their employer. For an organization to unlock generative AI’s potential at scale, it will need to invest time and expertise to research the available models, then build tools that abstract away the tedious work of finding the right prompts (a.k.a. prompt engineering) so the AI’s output is reasonably consistent.
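To make the abstraction idea concrete, here is a minimal Python sketch of how an internal tool might hide prompt engineering from end users. Everything here is illustrative: `summarize_for_client`, the template wording, and the `call_model` parameter are assumptions standing in for whatever LLM API and prompts an organization actually settles on after experimentation.

```python
# A vetted, centrally maintained prompt template. End users never see or
# edit this; they only supply their raw notes.
PROMPT_TEMPLATE = (
    "You are a financial-services copywriter. Summarize the following notes "
    "for a client in plain language, in no more than {max_words} words. "
    "Do not give investment advice.\n\nNotes:\n{notes}"
)

def summarize_for_client(notes: str, call_model, max_words: int = 150) -> str:
    """Fill in the engineered template, then delegate to the model caller.

    `call_model` is any function that takes a prompt string and returns the
    model's text, so the tool is independent of a specific vendor API.
    """
    prompt = PROMPT_TEMPLATE.format(max_words=max_words, notes=notes)
    return call_model(prompt)

# Demonstration with a stand-in model function; a real deployment would
# pass a function that calls the organization's chosen LLM service.
def fake_model(prompt: str) -> str:
    return "Summary of: " + prompt.splitlines()[-1]

print(summarize_for_client("Q4 portfolio rebalanced toward bonds.", fake_model))
```

Centralizing the template this way is what makes outputs "reasonably consistent": prompt changes happen in one place, can be reviewed, and apply to every user at once.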
Q: What are the risks in using generative AI?
While generative AI can create text, images and video nearly instantaneously, there are some broad concerns about the content it creates:
Legal: The large language models (LLMs) that power generative AI services use massive amounts of training data harvested from the open web. While “crawling” internet content is standard practice for search engines, AI services create wholly new content based on that information, typically without linking to or even crediting it. That may amount to a copyright violation — the basis for The New York Times’ recent lawsuit against OpenAI, the maker of ChatGPT.
Safety: It’s possible to use LLMs, intentionally or unintentionally, in ways that create content that misleads or offends, and may even be criminal in nature (think deepfakes). AI companies usually build guardrails into their models that reduce the likelihood of that happening, but since how LLMs craft answers isn’t fully understood, it’s impossible to eliminate it completely. In addition, overcorrecting for “unsafe” outputs can render the model less useful overall.
Quality: AI tools sometimes hallucinate — essentially make up facts and state them with supreme confidence — which has led to several embarrassing incidents where generative content was published or used without proper vetting. These cases emphasize the need for human oversight of any AI-driven process (a.k.a. “Human in the loop”). But even when the AI gets things right, the writing can often feel basic and soulless. The desire to improve upon “out of the box” outputs of LLMs is creating demand for skills in fine-tuning and prompt engineering.
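The "human in the loop" idea above can be sketched as a simple review gate: AI drafts are queued, and nothing is publishable until a person signs off. This is a toy illustration, not a real workflow tool; the names (`ReviewQueue`, `Draft`, `approve`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    approved: bool = False  # flipped only by a human reviewer

@dataclass
class ReviewQueue:
    drafts: list = field(default_factory=list)

    def submit(self, text: str) -> Draft:
        """AI-generated content enters the queue unapproved by default."""
        draft = Draft(text)
        self.drafts.append(draft)
        return draft

    def approve(self, draft: Draft) -> None:
        """A human reviewer vets the draft and signs off."""
        draft.approved = True

    def publishable(self) -> list:
        """Only human-approved drafts ever reach publication."""
        return [d.text for d in self.drafts if d.approved]

queue = ReviewQueue()
vetted = queue.submit("Vetted market recap.")
queue.submit("Unvetted draft that may contain a hallucination.")
queue.approve(vetted)
print(queue.publishable())  # only the approved draft appears
```

The design point is that publication reads exclusively from the approved set, so skipping review fails safe: an unreviewed draft simply never ships.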
The majority of advisors are bullish about AI, according to Arizent’s latest research.
OpenAI called the “regurgitation” of NY Times content a “rare bug” in a statement released in response to the NY Times lawsuit against OpenAI and Microsoft.
Good data in, good data out: the SEC announced its X (formerly Twitter) account was compromised after a post went out Tuesday, Jan. 9, stating that spot bitcoin ETFs were approved. They were not. How will AIs handle misinformation?