JPMorgan Chase CEO Jamie Dimon acknowledges that AI has reached into every facet of our lives and will transform the way we work. To capitalize on the AI wave, the bank has introduced a generative AI tool that “can offer information, solutions, and advice on a topic,” according to a memo issued to employees. The memo, signed by Mary Erdoes, head of JPMorgan’s asset and wealth management business; Teresa Heitsenrether, the bank’s chief data and analytics officer; and Mike Urciuoli, the asset and wealth management unit’s chief information officer, described a “ChatGPT-like product” that would be used in conjunction with Connect Coach and SpectrumGPT to “boost productivity.”
LLM Suite, JPMorgan’s new AI chatbot, will be accessible to approximately 50,000 employees, or about 15% of the company’s workforce. The ChatGPT-like tool is set to assist employees in the bank’s asset and wealth management division, according to the internal memo seen by the Financial Times. Billed as Wall Street’s largest LLM use case, LLM Suite was built to keep third-party AI models from infiltrating the bank’s highly confidential systems. Other financial institutions have taken the partnership route to AI adoption: Morgan Stanley, for example, teamed up with OpenAI to support its wealth management division. In May, Dimon told investors that “AI is going to change every job. It may eliminate some jobs. Some of it may create additional jobs. But you can’t envision one app, one database, or one job where it’s not going to help, aid, or abet.”
Given the sensitive nature of the financial information tied to tens of thousands of customers worldwide, JPMorgan chose to develop the chatbot in house to assist its research analysts in real time. Employees are prohibited from using mainstream consumer chatbots such as Anthropic’s Claude, OpenAI’s ChatGPT, or Google’s Gemini for administrative or technical tasks. LLM Suite instead acts as a gateway, letting JPMorgan users tap third-party LLMs from within the bank’s own controlled environment. Because the tool has only just debuted, no information has been published on its hallucinations, the tendency of LLMs to pass off vague or fabricated information as definitive fact.
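The memo does not describe LLM Suite’s internals, but the gateway arrangement described above, an internal service that forwards employee prompts to an outside model while the bank controls what data leaves its network, is a well-known pattern. Below is a minimal, illustrative sketch in Python; every name in it (redact, LLMGateway, call_third_party_model, the regex patterns) is a hypothetical stand-in, not anything JPMorgan has disclosed.

```python
# Illustrative sketch of an LLM gateway: employee prompts are redacted and
# audit-logged internally before being forwarded to a third-party model.
# All names and patterns here are hypothetical, not JPMorgan's design.

import re
import time

# Hypothetical patterns for data that must never leave the internal network.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{12,19}\b"),          # card/account-like numbers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like identifiers
]


def redact(prompt: str) -> str:
    """Mask sensitive tokens before the prompt is forwarded externally."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt


def call_third_party_model(prompt: str) -> str:
    """Stub for the outbound call to a vendor LLM API (details unknown)."""
    return f"(model response to: {prompt!r})"


class LLMGateway:
    """Routes employee prompts to an external LLM with redaction and audit."""

    def __init__(self):
        self.audit_log = []  # internal record of every request

    def ask(self, employee_id: str, prompt: str) -> str:
        safe_prompt = redact(prompt)
        response = call_third_party_model(safe_prompt)
        # Keep an internal record of each request for compliance review.
        self.audit_log.append({
            "ts": time.time(),
            "employee": employee_id,
            "prompt": safe_prompt,
        })
        return response


if __name__ == "__main__":
    gateway = LLMGateway()
    print(gateway.ask("e12345", "Summarize account 4111111111111111 activity"))
```

The point of the pattern is that the external provider only ever sees the redacted prompt, while the unredacted traffic, and a full audit trail, stay inside the organization’s own perimeter.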