This is a monthly column by CAA Board Member Dan Alvarez, addressing technology issues in the banking world, for non-tech professionals. 
 
Please feel free to contact him at chasealumtech@gmail.com.
 
Click here to read the first column, on artificial intelligence.
 
#2  Generative AI (October 2023)                                       

 

Thanks for giving this a read! Don’t forget to fill out the brief survey about this month's column – this is only the second piece, and we’re plenty open to feedback and suggestions. 

What is Generative AI?

 

At the highest level, Generative AI (Gen AI) is a type of artificial intelligence that can create new content, such as images, music or text, based on patterns it has learned from existing data. Imagine a computer application that can paint its own pictures or compose its own songs after studying many, many examples. That's the essence of generative AI.

 

There’s a pretty good chance you’ve heard of Gen AI without realizing it. If you’ve turned on the news anytime in the last 10 or so months, you might have heard of OpenAI’s ChatGPT, Google’s Bard, Meta’s Llama-2, Falcon 7B…the list goes on for quite a while. While all these names might sound like something out of a petting zoo, the reality is that each and every one of these tech companies, big and small alike, is trying to end up on top.

 

The field of Gen AI is evolving so rapidly that most of these companies are leapfrogging one another, creating bigger and better Large Language Models (LLMs for short; we’ll talk more about these later), which means more competition and constant improvement.

 

[Image caption: Don’t worry, we’re still a long way from this (at least for now).]

What is the difference between AI and Gen AI?

 

Both "Artificial Intelligence" (AI) and "Generative AI" refer to computer systems that mimic or emulate certain aspects of human intelligence, but they differ in scope and application.

 
  • Artificial Intelligence (AI): This is a broad field that encompasses a wide range of algorithms and methodologies that allow machines to perform tasks typically requiring human intelligence. This includes learning (adapting to new information), reasoning (using rules to reach approximate or definite conclusions), self-correction, problem-solving and even, potentially, perception. AI can be found in applications ranging from search engines, recommendation systems and voice assistants to robotics and advanced analytics. A great JPMC-focused example would be an AI model that can predict price changes and trade automatically in response to market conditions. These models process vast amounts of data, from price and trading volume to social media sentiment, to make real-time trading decisions.
  • Generative AI: This is a subset of AI. Its primary focus is to generate new content or data based on the patterns it has learned from existing data. Generative AI is more specialized and is primarily associated with content creation, be it text, data, images, music or other forms of media. One of the primary use cases* for Gen AI (and the one JPMC might be interested in) is document summarization: the model input could be your typical, lengthy legal user agreement, and the output would be a short, easy-to-read summary of that document with the most important portions highlighted. (There’s a brief sketch of what this might look like in code after the footnote below.) Of course, users would still be able to click through and read the full-length legal doc, but does anyone actually read those?

* Use cases are concepts employed in software development, product design and other fields to describe how a system can be used to achieve a defined outcome.     
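For the technically curious, here’s a minimal sketch of what a summarization call to an LLM service might look like, using OpenAI’s Python library. The model choice, prompt wording and file name are all illustrative assumptions, not any bank’s actual implementation.

```python
# Minimal sketch of LLM-based document summarization with the OpenAI
# Python library (pip install openai). All names here are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI API key

def summarize(document: str) -> str:
    """Ask the model for a short, plain-English summary of a long document."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the following legal document in plain "
                        "English, highlighting the most important clauses."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message["content"]

# Usage: feed in a lengthy user agreement, get back a readable summary.
with open("user_agreement.txt") as f:
    print(summarize(f.read()))
```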

 

What are other common use cases for Gen AI in the financial services industry?

 

Here are some relevant use cases for financial services:

  • Synthetic Data Generation: For institutions that lack comprehensive datasets for training machine learning models, Generative AI can produce synthetic financial data that maintains the statistical properties of real data without compromising sensitive information. This is particularly useful for stress-testing models or systems in scenarios where actual data might be limited. For risk assessment, generative models can create thousands of potential economic or market scenarios to evaluate the robustness of investment strategies or financial products. (A toy sketch of the idea follows this list.)
  • Customer Experience Personalization: By understanding customer transaction and interaction data, generative models can craft personalized financial product suggestions or generate custom-tailored financial advice or marketing campaigns.
  • Chatbots and Virtual Assistants: Gen AI-driven chatbots can handle routine customer queries to provide 24/7 customer service with limited or no assistance needed from an employee. They can answer frequently asked questions, help with transactions and guide users through various banking processes.
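To make the synthetic-data idea a bit more concrete, here’s a toy sketch in Python. It “learns” only the average and the co-movement (covariance) of some pretend daily returns, then samples brand-new rows with the same statistics. Every number here is made up, and real systems use far more sophisticated generative models; this is just the flavor of the technique.

```python
# Toy sketch: generate synthetic "daily returns" that share the mean and
# covariance of a real dataset, without copying any real records.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for real (sensitive) daily returns on three assets.
real_returns = rng.normal(loc=0.0005, scale=0.01, size=(1000, 3))

# Learn the statistical properties of the real data...
mean = real_returns.mean(axis=0)
cov = np.cov(real_returns, rowvar=False)

# ...then sample brand-new rows with the same statistics.
synthetic_returns = rng.multivariate_normal(mean, cov, size=1000)

print("real mean:     ", mean)
print("synthetic mean:", synthetic_returns.mean(axis=0))
```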
 

Because of the incredibly competitive nature of the AI field as a whole, most of the applicable use cases for JPMC are likely to be kept under wraps until they are fully baked and ready for release. One excellent use case did make headlines back in March: IndexGPT, a tool for helping users select index funds for their investments based on their preferences and inputs.

 

If you read last month’s AI piece (particularly the question about AI taking your job), you’ll know that bankers shouldn’t worry about this new platform taking their roles from them, since banking is still largely a relationship business, and the general public is apprehensive about giving their money to different forms of robo-advisors. (I will add this to the list of future Tech Corner columns…)

What is a Large Language Model (LLM) and how does it relate to Gen AI?

A large language model (LLM) is a type of artificial intelligence algorithm that uses deep learning techniques and massively large datasets to understand, summarize, generate and predict new content. Many of those odd-sounding names at the beginning of the column are LLMs (Llama-2 & Falcon 7B), and some are the names of the services that provide LLMs for general use. (ChatGPT, for instance, uses the GPT-3.5 LLM, and Bard uses PaLM 2.)

 

A key characteristic of LLMs is their sheer scale: they are trained on truly massive datasets, and the resulting models often contain billions or even trillions of parameters. Parameters are the internal components of a model that are fine-tuned to enable the model to make accurate predictions or generate relevant outputs. They generally represent the learned knowledge of the model: usually, the higher the number of parameters, the more complex the model (and the more expensive it is to run).
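To put those parameter counts in perspective, here’s some back-of-the-envelope arithmetic. It assumes 2 bytes per parameter (16-bit precision, a common but not universal choice) and counts only the memory needed to hold the weights themselves, not to actually serve the model.

```python
# Rough memory needed just to hold a model's weights, assuming 2 bytes
# per parameter (16-bit precision). Serving a model takes more than this.
BYTES_PER_PARAM = 2

for name, params in [("Falcon 7B", 7e9), ("Llama-2 70B", 70e9)]:
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: ~{gigabytes:,.0f} GB for the weights alone")
```

That arithmetic is why running the biggest models is so expensive: the weights alone for a 70-billion-parameter model need well over a hundred gigabytes of fast memory, which is specialized-hardware territory.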

So…what you’re saying is an LLM essentially knows everything?

Author’s note: I went ahead and asked ChatGPT (running the GPT-3.5 LLM) to answer this question. I’ve copy/pasted the answer below and verified it’s correct.

 

No, Language Models like LLMs (Large Language Models) such as GPT-3.5, do not know everything. They have limitations and are not omniscient or all-knowing. Their knowledge is based on the text data they have been trained on, which is typically vast but not exhaustive. These models are trained on a snapshot of the internet up to a certain date (my knowledge cutoff date is September 2021), and they do not have the ability to access real-time information or update their knowledge beyond that cutoff date.

 

Furthermore, LLMs generate responses based on patterns and information present in the training data, but they may not always provide accurate or up-to-date information. They can also generate incorrect or biased information if such content exists in their training data.

 

In summary, while LLMs can provide information and generate text on a wide range of topics, they are not infallible and do not possess real-time or comprehensive knowledge. It's essential to verify information obtained from LLMs with reliable sources and use them as tools for assistance rather than as absolute authorities on all subjects.

 

Thanks, OpenAI.

What are the major downsides to LLMs and Generative AI?

There are many drawbacks to LLMs and Generative AI – I’ll harp on what I feel are the three most serious ones below:

  1. Cost: Complex models require incredible amounts of computational horsepower to train, run efficiently and operate at scale. As of now, the creation and deployment of complex models have outstripped the available capabilities of computer hardware. Companies like AMD, NVIDIA and Amazon have started to develop specialized hardware to tackle this problem, but it’s still very expensive and largely only available to those with the scale and means of purchasing it; by means, we’re talking $$$,$$$ per device.
  2. Hallucinations & Misinformation: Complex LLMs are great at most straightforward, general-purpose tasks. But when they do make mistakes, they tend to do so confidently. A great example is asking a question like, “What is the weather in Gotham City?” and then getting an actual response back: “The weather in Gotham City is 58 degrees with a high chance of rain.” Of course, Gotham City is a fictional city, but you were still able to get an answer for it anyway. In the last newsletter on AI, we talked about GIGO (Garbage In, Garbage Out), and LLMs are still susceptible to this: if the data you train the model on isn’t vetted and accurate, you won’t get accurate results down the line.
  3. Data Privacy: Companies with significant publicly available intellectual property do not want their data included in the massive datasets used to train LLMs. Companies that use LLMs for business-related tasks should also be wary that the prompts (input data) their users provide to the LLM (however sensitive that data might be!) could very well be used to train the next generation of LLMs against their wishes. This problem is largely exacerbated by the current lack of regulation and transparency in the LLM creation process, but that is slowly changing, with providers beginning to offer legal guarantees to companies that wish to leverage LLMs as part of their business model.

There are other downsides to using Generative AI and LLMs, but I feel these are the most pressing at the moment – along with the other general issues brought forward in last month's column.

 

I promise next month’s piece won’t be about anything to do with AI. Please be sure to give us your thoughts on this month's column! We’d also love to collect future topics to write on – don’t be shy.

 
About Dan Alvarez

 

Dan Alvarez began at JPMorgan Chase in June 2016 as a summer technology analyst/infrastructure engineer, and left in April 2022 as a Senior Software Engineer in Global Technology Infrastructure - Product Strategy and Site Reliability Engineering (SRE). Since May 2022, he has worked for Amazon Web Services as an Enterprise Solutions Architect.

     He is also an avid guest lecturer for the City University of New York and has given lectures on artificial intelligence, cloud computing and career progression. Dan also works closely with Amazon's Skills to Jobs team and the NY Tech Alliance with the goal of creating the most diverse, equitable and accessible tech ecosystem in the world.

     A graduate of Brooklyn College, he is listed as an Alumni Champion of the school and was named one of Brooklyn College's 30 Under 30. He lives in Bensonhurst, Brooklyn.

 
----------------------------------
 
Comments?