This blog post is co-written with Siokhan Kouassi and Martin Gregory at Parameta.
When financial industry professionals need reliable over-the-counter (OTC) data solutions and advanced analytics, they can turn to Parameta Solutions, the data powerhouse behind TP ICAP. With a focus on data-led solutions, Parameta Solutions makes sure that these professionals have the insights they need to make informed decisions. Managing thousands of client service requests efficiently while maintaining accuracy is crucial for Parameta’s reputation as a trusted data provider. Through a simple yet effective application of Amazon Bedrock Flows, Parameta transformed their client service operations from a manual, time-consuming process into a streamlined workflow in just two weeks.
Parameta empowers clients with comprehensive industry insights, from price discovery to risk management, and pre- to post-trade analytics. Their services are fundamental to clients navigating the complexities of OTC transactions and workflow effectively. Accurate and timely responses to technical support queries are essential for maintaining service quality.
However, Parameta’s support team faced a common challenge in the financial services industry: managing an increasing volume of email-based client requests efficiently. The traditional process involved multiple manual steps—reading emails, understanding technical issues, gathering relevant data, determining the correct routing path, and verifying information in databases. This labor-intensive approach not only consumed valuable time, but also introduced risks of human error that could potentially impact client trust.
Recognizing the need for modernization, Parameta sought a solution that could maintain their high standards of service while significantly reducing resolution times. The answer lay in using generative AI through Amazon Bedrock Flows, enabling them to build an automated, intelligent request handling system that would transform their client service operations. Amazon Bedrock Flows provides a powerful, low-code solution for creating complex generative AI workflows, with an intuitive visual interface and a set of APIs in the Amazon Bedrock SDK. By seamlessly integrating foundation models (FMs), prompts, agents, and knowledge bases, organizations can rapidly develop flexible, efficient AI-driven processes tailored to their specific business needs.
In this post, we show you how Parameta used Amazon Bedrock Flows to transform their manual client email processing into an automated, intelligent workflow that reduced resolution times from weeks to days while maintaining high accuracy and operational control.
Client email triage
For Parameta, every client email represents a critical touchpoint that demands both speed and accuracy. The challenge of email triage extends beyond simple categorization—it requires understanding technical queries, extracting precise information, and providing contextually appropriate responses.
The email triage workflow involves multiple critical steps:
- Accurately classifying incoming technical support emails
- Extracting relevant entities like data products or time periods
- Validating if all required information is present for the query type
- Consulting internal knowledge bases and databases for context
- Generating either complete responses or specific requests for additional information
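The chain above can be sketched as a few composed functions. The keyword-matching stubs below merely stand in for the LLM prompts Parameta uses and are not their implementation; the request type and entity names mirror the price verification example later in this post:

```python
from dataclasses import dataclass, field

# Required entities per request type (illustrative rule table).
REQUIRED_ENTITIES = {"price_verification_request": ["ticker", "date_requested"]}

@dataclass
class TriageResult:
    request_type: str
    entities: dict = field(default_factory=dict)
    missing: list = field(default_factory=list)

def classify(email: str) -> str:
    # Stand-in for the classification prompt: naive keyword matching.
    return "price_verification_request" if "price" in email.lower() else "other"

def extract(email: str) -> dict:
    # Stand-in for the entity extraction prompt.
    entities = {}
    for token in email.replace("(", " ").replace(")", " ").split():
        if token.count("_") >= 2:  # e.g. USD_2Y_1Y
            entities["ticker"] = token
    if "March 15, 2024" in email:
        entities["date_requested"] = "2024-03-15"
    return entities

def triage(email: str) -> TriageResult:
    request_type = classify(email)
    entities = extract(email)
    required = REQUIRED_ENTITIES.get(request_type, [])
    missing = [k for k in required if k not in entities]
    return TriageResult(request_type, entities, missing)

result = triage("Please verify the closing price for USD_2Y_1Y as of March 15, 2024.")
print(result.request_type, result.missing)  # price_verification_request []
```

In the real workflow, each stub is replaced by a dedicated prompt node in the flow, but the shape of the pipeline stays the same: classify, extract, check completeness, then route.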
The manual handling of this process led to time-consuming back-and-forth communications, the risk of overlooking critical details, and inconsistent response quality. With that in mind, Parameta identified this as an opportunity to develop an intelligent system that could automate this entire workflow while maintaining their high standard of accuracy and professionalism.
Path to the solution
When evaluating solutions for email triage automation, several approaches appeared viable, each with its own trade-offs. However, not all of them were a good fit for Parameta.
Traditional NLP pipelines and ML classification models
Traditional natural language processing pipelines struggle with email complexity due to their reliance on rigid rules and poor handling of language variations, making them impractical for dynamic client communications. The inconsistency in email structures and terminology, which varies significantly between clients, further complicates their effectiveness. These systems depend on predefined patterns, which are difficult to maintain and adapt when faced with such diverse inputs, leading to inefficiencies and brittleness in handling real-world communication scenarios. Machine learning (ML) classification models offer improved categorization, but introduce complexity by requiring separate, specialized models for classification, entity extraction, and response generation, each with its own training data and contextual limitations.
Deterministic LLM-based workflows
Parameta’s solution demanded more than just raw large language model (LLM) capabilities—it required a structured approach while maintaining operational control. Amazon Bedrock Flows provided this critical balance through the following capabilities:
- Orchestrated prompt chaining – Multiple specialized prompts work together in a deterministic sequence, each optimized for specific tasks like classification, entity extraction, or response generation.
- Multi-conditional workflows – Support for complex business logic with the ability to branch flows based on validation results or extracted information completeness.
- Version management – Simple switching between different prompt versions while maintaining workflow integrity, enabling rapid iteration without disrupting the production pipeline.
- Component integration – Seamless incorporation of other generative AI capabilities like Amazon Bedrock Agents or Amazon Bedrock Knowledge Bases, creating a comprehensive solution.
- Experimentation framework – The ability to test and compare different prompt variations while maintaining version control. This is crucial for optimizing the email triage process.
- Rapid iteration and tight feedback loop – The system allows for quick testing of new prompts and immediate feedback, facilitating continuous improvement and adaptation.
This structured approach to generative AI through Amazon Bedrock Flows enabled Parameta to build a reliable, production-grade email triage system that maintains both flexibility and control.
Solution overview
Parameta’s solution demonstrates how Amazon Bedrock Flows can transform complex email processing into a structured, intelligent workflow. The architecture comprises three key components, as shown in the following diagram: orchestration, structured data extraction, and intelligent response generation.
Orchestration
Amazon Bedrock Flows serves as the central orchestrator, managing the entire email processing pipeline. When a client email arrives through Microsoft Teams, the workflow invokes the following stages:
- The workflow initiates through Amazon API Gateway, which passes the email to an AWS Lambda function that extracts the text and stores it in Amazon Simple Storage Service (Amazon S3).
- Amazon Bedrock Flows coordinates the sequence of operations, starting with the email from Amazon S3.
- Version management streamlines controlled testing of prompt variations.
- Built-in conditional logic handles different processing paths.
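A flow like this is started with the InvokeFlow operation in the Amazon Bedrock SDK. The sketch below shows roughly how a Lambda function might call it with boto3; the flow and alias identifiers are placeholders, and the node names assume the flow's default input node:

```python
def build_flow_input(email_text: str) -> list:
    """Shape the flow input payload; node names must match the flow definition."""
    return [{
        "content": {"document": email_text},
        "nodeName": "FlowInputNode",       # default input node name
        "nodeOutputName": "document",
    }]

def invoke_email_flow(email_text: str, flow_id: str, alias_id: str) -> str:
    import boto3  # deferred import so the payload helper stays usable offline
    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_flow(
        flowIdentifier=flow_id,            # placeholder flow ID
        flowAliasIdentifier=alias_id,      # placeholder alias ID
        inputs=build_flow_input(email_text),
    )
    # The response is an event stream; collect the flow output events.
    chunks = []
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            chunks.append(str(event["flowOutputEvent"]["content"]["document"]))
    return "".join(chunks)
```

Version management then happens outside this code path: pointing the alias at a new flow version changes behavior without touching the caller.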
Structured data extraction
A sequence of specialized prompts within the flow handles the critical task of information processing:
- The classification prompt identifies the type of technical inquiry
- The entity extraction prompt discovers key data points
- The validation prompt verifies completeness of required information
These prompts work in concert to transform unstructured emails into actionable data, with each prompt optimized for its specific task.
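Because each prompt hands machine-readable output to the next, a thin parsing layer between prompt steps keeps the chain robust when a model wraps its answer in prose. A minimal sketch, assuming the prompts are instructed to emit JSON (the fallback behavior here is our assumption, not Parameta's implementation):

```python
import json
import re

def parse_prompt_json(model_text: str) -> dict:
    """Pull the first JSON object out of a model response, tolerating extra prose."""
    match = re.search(r"\{.*\}", model_text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object in model output")
    return json.loads(match.group(0))

raw = 'Sure! Here are the entities:\n{"ticker": "USD_2Y_1Y", "request_type": "closing_price"}'
print(parse_prompt_json(raw)["ticker"])  # USD_2Y_1Y
```

A parse failure is a natural place to branch the flow into a retry or a human-review path rather than passing malformed data downstream.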
Intelligent response generation
The final stage uses advanced AI capabilities for response creation:
- An Amazon Bedrock agent synthesizes information from multiple sources, including the internal knowledge base and the price database
- Response generation adapts based on validation results:
- Specific information requests for incomplete queries
- Comprehensive solutions for complete inquiries
- Responses are delivered back to clients through Microsoft Teams
The following diagram illustrates the flow for the email triaging system.
This structured approach allows Parameta to maintain consistent, high-quality responses while significantly reducing processing time for client inquiries.
Solution walkthrough
Let’s walk through how Parameta’s email triage system processes a typical client inquiry. We start with the following sample client email:
Dear Support Team,

Could you please verify the closing price for the Dollar ATM swaption (USD_2Y_1Y) as of March 15, 2024? We need this for our end-of-day reconciliation.

Best regards,
John Smith
Portfolio Manager, ABC Investments
The classification prompt classifies this as a price verification request based on the content and intent. It uses the email as the input, and the output is `type: price_verification_request`.
The entity extraction prompt uses the preceding email and provides the following output:

```json
{
  "product_type": "Interest Rate Option",
  "ticker": "USD_2Y_1Y",
  "date_requested": "2024-03-15",
  "data_source": "ICAP",
  "request_type": "closing_price"
}
```
The workflow then performs validation using Amazon Bedrock Flows. This requires the following checks for price verification:
- Product identifier (`USD_2Y_1Y` present)
- Date (March 15, 2024 present)
- Price type (closing specified)
When all required entities are found, the workflow proceeds to the Amazon Bedrock agent.
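This conditional step can be expressed as a small rule table feeding the flow's branch decision. The sketch below is illustrative; the routing strings are invented, and only the field names come from the extraction output shown earlier:

```python
# Required fields per request type (illustrative; mirrors the checks above).
REQUIRED_FIELDS = {
    "price_verification_request": ("ticker", "date_requested", "request_type"),
}

def next_step(request_type: str, entities: dict) -> str:
    """Mirror the flow's conditional node: route to the agent only when complete."""
    missing = [f for f in REQUIRED_FIELDS.get(request_type, ()) if not entities.get(f)]
    if missing:
        return "request_missing_info:" + ",".join(missing)
    return "invoke_agent"

entities = {"ticker": "USD_2Y_1Y", "date_requested": "2024-03-15",
            "request_type": "closing_price"}
print(next_step("price_verification_request", entities))  # invoke_agent
```

When fields are missing, the flow branches to a prompt that drafts a targeted follow-up email instead of invoking the agent.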
The agent submits the following query to the knowledge base: “Product specifications and market context for Interest Rate option USD_2Y_1Y.”
The agent also queries the price data in Snowflake (using Amazon Athena).
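The query itself could resemble the following sketch. The table and column names here are invented for illustration and are not Parameta's schema; in practice the agent's action group would execute a query like this through the Athena connection:

```python
def build_price_query(ticker: str, as_of_date: str) -> str:
    # Hypothetical table/column names; use parameterized queries in real code
    # rather than string interpolation.
    return (
        "SELECT closing_price, price_timestamp "
        "FROM market_data.daily_prices "
        f"WHERE ticker = '{ticker}' AND price_date = DATE '{as_of_date}'"
    )

print(build_price_query("USD_2Y_1Y", "2024-03-15"))
```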
The generated response is as follows:
Dear John,

Regarding your price verification request for USD_2Y_1Y (USD ATM Swaption Implied Volatility Mid 2Y vs 1Y) as of March 15, 2024: Please note this information is provided under your existing data service agreement. If you need any further clarification, please don’t hesitate to ask.

Best regards,
Parameta Support
Benefits
Parameta quickly transitioned from implementation to achieving impactful results, thanks to the substantial benefits provided by Amazon Bedrock Flows across various areas:
- Operational efficiency
- Development teams accelerated prompt optimization by quickly testing different variations for email classification and entity extraction
- Time-to-insight reduced from weeks to days through rapid prompt iteration and immediate feedback on performance
- Quick adjustments to validation rules without rebuilding the entire workflow
- Team collaboration
- Modification of prompts through a simplified interface without deep AWS knowledge
- Support teams gained the ability to understand and adjust the response process
- Cross-functional teams collaborated on prompt improvements using familiar interfaces
- Model transparency
- Clear visibility into why emails were classified into specific categories
- Understanding of entity extraction decisions helped refine prompts for better accuracy
- Ability to trace decisions through the workflow enhanced trust in automated responses
- Observability and governance
- Comprehensive observability provided stakeholders with a holistic view of the end-to-end process
- Built-in controls provided appropriate oversight of the automated system, aligning with governance and compliance requirements
- Transparent workflows enabled stakeholders to monitor, audit, and refine the system effectively, providing accountability and reliability
These benefits directly translated to Parameta’s business objectives: faster response times to client queries, more accurate classifications, and improved ability to maintain and enhance the system across teams. The structured yet flexible nature of Amazon Bedrock Flows enabled Parameta to achieve these gains while maintaining control over their critical client communications.
Key takeaways and best practices
When implementing Amazon Bedrock Flows, consider these essential learnings:
- Prompt design principles
- Design modular prompts that handle specific tasks for better maintainability of the system
- Keep prompts focused and concise to optimize token usage
- Include clear input and output specifications for better maintainability and robustness
- Diversify model selection for different tasks within the flow:
- Use lighter models for simple classifications
- Reserve advanced models for complex reasoning
- Create resilience through model redundancy
- Flow architecture
- Start with a clear validation strategy early in the flow
- Include error handling in prompt design
- Consider breaking complex flows into smaller, manageable segments
- Version management
- Implement proper continuous integration and continuous delivery (CI/CD) pipelines for flow deployment
- Establish approval workflows for flow changes
- Document flow changes and their impact including metrics
- Testing and implementation
- Create comprehensive test cases covering a diverse set of scenarios
- Validate flow behavior with sample datasets
- Constantly monitor flow performance and token usage in production
- Start with smaller workflows and scale gradually
- Cost optimization
- Review and optimize prompt lengths regularly
- Monitor token usage patterns
- Balance between model capability and cost when selecting models
These practices, derived from real-world implementation experience, can help you deploy Amazon Bedrock Flows successfully while maintaining efficiency and reliability.
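As one concrete example of the model-diversification advice, model selection within a flow can be as simple as a routing table with a fallback for resilience. The model IDs below are examples of lighter versus more capable Bedrock models, and the tiering itself is an assumption, not Parameta's configuration:

```python
# Example routing table: task -> (primary model, fallback for resilience).
# Model IDs are illustrative Bedrock identifiers; verify availability in your Region.
MODEL_ROUTES = {
    "classification": ("anthropic.claude-3-haiku-20240307-v1:0",
                       "amazon.titan-text-express-v1"),
    "response_generation": ("anthropic.claude-3-sonnet-20240229-v1:0",
                            "anthropic.claude-3-haiku-20240307-v1:0"),
}

def pick_model(task: str, primary_healthy: bool = True) -> str:
    """Use a lighter model for simple tasks; fall back if the primary is degraded."""
    primary, fallback = MODEL_ROUTES[task]
    return primary if primary_healthy else fallback

print(pick_model("classification"))  # lighter model for a simple task
```

Keeping this mapping in configuration rather than in prompts makes it easy to rebalance cost against capability as usage patterns emerge.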
Testimonials
“As the CIO of our company, I am thoroughly impressed by how rapidly our team was able to leverage Amazon Bedrock Flows to create an innovative solution to a complex business problem. The low barrier to entry of Amazon Bedrock Flows allowed our team to quickly get up to speed and start delivering results. This tool is democratizing generative AI, making it easier for everyone in the business to get hands-on with Amazon Bedrock, regardless of their technical skill level. I can see this tool being incredibly useful across multiple parts of our business, enabling seamless integration and efficient problem-solving.”
– Roland Anderson, CIO at Parameta Solutions
“As someone with a tech background, using Amazon Bedrock Flows for the first time was a great experience. I found it incredibly intuitive and user-friendly. The ability to refine prompts based on feedback made the process seamless and efficient. What impressed me the most was how quickly I could get started without needing to invest time in creating code or setting up infrastructure. The power of generative AI applied to business problems is truly transformative, and Amazon Bedrock has made it accessible for tech professionals like myself to drive innovation and solve complex challenges with ease.”
– Martin Gregory, Market Data Support Engineer, Team Lead at Parameta Solutions
Conclusion
In this post, we showed how Parameta uses Amazon Bedrock Flows to build an intelligent client email processing workflow that reduces resolution times from weeks to days while maintaining high accuracy and control. As organizations increasingly adopt generative AI, Amazon Bedrock Flows offers a balanced approach, combining the flexibility of LLMs with the structure and control that enterprises require.
For more information, refer to Build an end-to-end generative AI workflow with Amazon Bedrock Flows. For code samples, see Run Amazon Bedrock Flows code samples. Visit the Amazon Bedrock console to start building your first flow, and explore our AWS Blog for more customer success stories and implementation patterns.
About the Authors
Siokhan Kouassi is a Data Scientist at Parameta Solutions with expertise in statistical machine learning, deep learning, and generative AI. His work focuses on implementing efficient ETL data analytics pipelines and solving business problems through automation, experimentation, and innovation using AWS services, with a code-first approach based on the AWS CDK.
Martin Gregory is a Senior Market Data Technician at Parameta Solutions with over 25 years of experience. He has recently played a key role in transitioning Market Data systems to the cloud, leveraging his deep expertise to deliver seamless, efficient, and innovative solutions for clients.
Talha Chattha is a Senior Generative AI Specialist Solutions Architect at AWS, based in Stockholm. With over 10 years of experience working with AI, Talha helps establish practices to ease the path to production for generative AI workloads. Talha is an expert in Amazon Bedrock and supports customers across the EMEA region. He is passionate about meta-agents, scalable on-demand inference, advanced RAG solutions, and optimized prompt engineering with LLMs. When not shaping the future of AI, he explores scenic European landscapes and delicious cuisines.
Jumana Nagaria is a Prototyping Architect at AWS, based in London. She builds innovative prototypes with customers to solve their business challenges. She is passionate about cloud computing and believes in giving back to the community by inspiring women to join tech and encouraging young girls to explore STEM fields. Outside of work, Jumana enjoys travelling, reading, painting, and spending quality time with friends and family.
Hin Yee Liu is a prototype Engagement Manager at AWS, based in London. She helps AWS customers to bring their big ideas to life and accelerate the adoption of emerging technologies. Hin Yee works closely with customer stakeholders to identify, shape and deliver impactful use cases leveraging Generative AI, AI/ML, Big Data, and Serverless technologies using agile methodologies. In her free time, she enjoys knitting, travelling and strength training.