Maximize automation value to restore the human perspective

Apply automation in risk analytics platforms to reduce manual data preparation and technical debt

Photo credit: John Allen on Unsplash
Retain the human perspective on data for improved analytics insights

Risk analytics will always be an environment of quickly changing demands. The unpredictability and volatility of external factors, and the need for rapid, agile adaptation, work against the need to produce accurate and informed forecasts. Delivering these results often forces enterprises to revert to manual interventions in their input data generation processes.

But wait a minute: why are we still wasting human effort on hand-tooled workarounds when we would rather apply our passion, talents, and unique institutional knowledge to make the most of our analytics assets? Sadly, the time we think we save with quick adjustments only introduces human error and adds up to technical debt, with processes that are over-managed at the expense of real insight.

We all know how it begins. One simple adjustment to input data quickly becomes a series of manipulations, and before you know it an entire work week or more is devoted to manual preparation before you can run any models. Then comes the inertia stage, when you're forced to ignore or delay even more adjustments that you are certain would lead to more reliable forecasts. Whether it is tweaking a few cells or entire fields, you're introducing layer upon layer of risk for human error.

One natural course correction is to add rigidity to the input data preparation process, but rigidity also undermines the human-driven insights the process is intended to foster in your organization. The outcome: a reduced level of confidence in your predictions. Whether you or someone on your team is the one requiring manual manipulation of data, eliminating past and future analytics technical debt and human error calls for a new habit: an automation-first approach to external data preparation.

Developing and implementing robust automation and change control systems can feel like an overwhelming and challenging endeavor, but the pay-off can be profound. Equipping a risk analytics team with tools that can efficiently handle a wide variety of change requests will instill greater confidence in the team and gain back more time for the organization to assess and react to ever-changing economic environments and demands. By removing the potential for human error, you create the open space required to leverage the human perspective.

Automation replaces debt with know-how

Control and audit requirements make risk platforms particularly susceptible to technical debt accumulation. Working in rigid, inflexible platforms, many teams both create technical debt and let it accumulate over time. When a person's or an entire team's responsibilities revolve around manual workarounds, data preparation, and gathering enough evidence to satisfy auditors that these interventions are justified, it's time to assess how you value your team's expertise and time.

Building platforms with automation in mind at each point of entry and exit can replace time spent working on the platform with time spent gathering evidence and insights. More time invested in the substance of the data being ingested, and in how best to incorporate institutional knowledge about your data sources and portfolios into analytics, can lead to higher model accuracy. Leveraging automation in risk data preparation can reduce both the need for qualitative adjustments and the burden of justifying them, resulting in more polished, more easily prepared deliverables supporting outputs such as earnings and disclosure reports.

As we discussed in our previous CRC blog, some preparation can be moved entirely out of the generation process by moving the adjustments into the models' logic. Like it or not, though, there will always be data prep processes for auxiliary data. This blog focuses on economic data preparation as an example for automation, though we still see a wide variety of auxiliary data where manual preparation will likely remain the norm for the foreseeable future as you continue to run your models, including:

• Economic
• Coefficients – based on historical data
• Account attributes
• Validation

Common manipulations of auxiliary data

Most, if not all, models require supplemental economic data; you either leverage data from a source such as Moody's or create your own forecasts. Risk analytics in the financial services industry relies on both historical and forecasted economic conditions, whether from third-party providers or your own enterprise. But the time it takes for these production models and data to reflect market changes will not always be sufficient for your organization's reporting needs. Here are common reasons organizations manipulate auxiliary economic data (a minimal blending sketch follows the list):

• Blending scenario weightings
• Combining multiple sources into a single dataset
• Hand-picking scenario:mnemonic combinations
• Changing reversion periods
• Changing conversion methods of forecast periods
• Replacing expired variables/indexes
• Reflecting new crisis and/or upside scenarios (e.g., COVID, the Russia/Ukraine conflict)
• Variable transformations
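
To make the first of these concrete, here is a minimal sketch (in Python with pandas) of blending scenario weightings into a single forecast. The scenario names, weights, and column layout are illustrative assumptions only; the actual weighting scheme is a governance decision for your organization.

```python
import pandas as pd

# Illustrative weights -- actual weightings are a governance decision
WEIGHTS = {"baseline": 0.5, "adverse": 0.3, "severely_adverse": 0.2}

def blend_scenarios(scenarios):
    """Combine per-scenario forecasts (date x mnemonic) into one weighted forecast."""
    return sum(scenarios[name] * weight for name, weight in WEIGHTS.items())

# Example usage with toy monthly forecasts of one mnemonic (unemployment rate)
dates = pd.date_range("2023-01-01", periods=3, freq="MS")
scenarios = {
    "baseline": pd.DataFrame({"UNRATE": [3.6, 3.7, 3.8]}, index=dates),
    "adverse": pd.DataFrame({"UNRATE": [4.2, 4.8, 5.4]}, index=dates),
    "severely_adverse": pd.DataFrame({"UNRATE": [5.0, 6.3, 7.5]}, index=dates),
}
print(blend_scenarios(scenarios))
```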

With such a broad swath of purposes, often with urgency or complexity attached, it's not surprising that manual manipulation can become deeply entrenched in an otherwise rigid process. Staying swamped in the manual, dirty, and dull processes tied to external data preparation and the analytics required to generate insights is a recipe for failure.


Methods for automating economic data generation

Getting historical and forecasted macroeconomic data prepared for model runs can be tedious, and many situations can require adjustments to the preparation process. These include variable naming changes due to replaced or updated indices in data from a Moody's subscription or similar, new variables required for new or changed models, manipulations to business-as-usual forecasts at the variable and monthly level, or changing vintages for specific scenarios. The list of use cases is a long one, and with it comes the temptation to stay the course with 'minor' tweaks here and there to make it fit your goal and timeline.

Picture this: instead of pushback and excuses for not reacting to a situation, you're reviewing activity in your applicable markets and coming up with appropriate responses to unanticipated situations, with meaningful adjustments that make your forecasts more reliable. What's a good way to make that shift?

We recommend you start by automating the extraction process. Depending on your specific solution implementation, you may want to transform your variables for model usage before or after the economic data is loaded into your analytics platform. If your organization is getting economic data from a third party, we recommend integrating with that provider's API. This will be the most efficient way to extract the data, though some of our clients have been unable to get sign-off from internal IT partners to configure this integration.
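
As a rough illustration of that extraction step, the sketch below pulls one forecast scenario from a hypothetical provider endpoint and lands it where the analytics platform can pick it up. The URL, parameters, and response shape are placeholders, not any specific vendor's API; your provider's documentation defines the real contract.

```python
import requests
import pandas as pd

# Hypothetical endpoint and credentials -- replace with your provider's
# documented API contract.
BASE_URL = "https://api.example-economics.com/v1/scenarios"
API_KEY = "YOUR_API_KEY"

def extract_scenario(scenario_id: str, vintage: str) -> pd.DataFrame:
    """Pull one economic scenario vintage and return it as a tidy DataFrame."""
    resp = requests.get(
        f"{BASE_URL}/{scenario_id}",
        params={"vintage": vintage, "frequency": "monthly"},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=60,
    )
    resp.raise_for_status()  # fail loudly rather than silently ingesting bad data
    # Assumed response shape: rows of {mnemonic, date, value}
    df = pd.DataFrame(resp.json()["observations"])
    df["date"] = pd.to_datetime(df["date"])
    return df

if __name__ == "__main__":
    baseline = extract_scenario("baseline", vintage="2023-03")
    # Land the raw extract where the analytics platform picks it up
    baseline.to_parquet("economic_baseline_2023-03.parquet", index=False)
```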

While direct API integration is the ideal, and we work with clients to gain IT stakeholder participation, there are many ways to work around these technical challenges. In one circumstance, we enabled the analytics team with utilities they could use on demand to ingest final economic data inputs.

Automating ETL is a great starting place for bridging gaps in risk analytics

If you know the full scope of the changes you need and are willing to address, then building a user interface to apply those changes can be a great way to minimize time spent on change. However, an organization rarely knows the full extent of the adjustments required to see meaningful impact. To accommodate the unknown, we find success in a balance: allow some manual manipulation while still meeting the requirements of the automated process. We may need to allow some work in Excel, for example, but the automated data generation process requires a specific data model, incorporates error handling, produces the data required for downstream models reliably, and records the necessary audit and control information. With these modifications in place, the time spent preparing and validating the adjustments is reduced to minimal human effort.
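
As one sketch of what "allow Excel, but enforce the data model" can look like, the snippet below validates a manually adjusted workbook against a required schema before anything flows downstream. The column names, checks, and file paths are illustrative assumptions rather than a prescribed standard.

```python
import pandas as pd

# Illustrative data model for adjusted economic inputs (assumed, not prescriptive)
REQUIRED_COLUMNS = {"scenario", "mnemonic", "date", "value"}

def load_adjusted_workbook(path: str) -> pd.DataFrame:
    """Load a manually adjusted Excel file and enforce the expected data model."""
    df = pd.read_excel(path)

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"{path}: missing required columns {sorted(missing)}")

    # Basic type and integrity checks so downstream models never see surprises
    df["date"] = pd.to_datetime(df["date"], errors="raise")
    if df["value"].isna().any():
        raise ValueError(f"{path}: blank values found in 'value' column")

    dupes = df.duplicated(subset=["scenario", "mnemonic", "date"])
    if dupes.any():
        raise ValueError(f"{path}: {int(dupes.sum())} duplicate scenario/mnemonic/date rows")

    return df
```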


Reap the rewards of automation

Automating data generation and modification processes is a great way to stay nimble, engaged, and innovative. Running a successful financial services organization requires informed decision making. We can never anticipate every question that will need to be answered, so the long-term success of informed decision-making processes is founded on both efficiency and agility.

Think of all the context you could add to presentations and reports if your time were valued more for gathering evidence than for mere preparation. It could be as simple as a phone call to someone in finance or accounting to better understand what bankers are seeing in the field with their customers, or a call to a peer at a different organization to understand how they're planning to address a similar situation.

The alternative reads like this: one client hired a recent master's graduate whose job responsibilities quickly turned into keeping track of all the versions of a single economic dataset the organization was evaluating on a monthly basis. The manual nature of the processes they set up around tracking, analyzing, and manipulating this single component of input data led to preventable errors and extra time spent triple-checking the work. Needless to say, it wasn't the dream of the new hire, or the outcome the company hoped for when it expanded the team.

When you free up time for people to engage with the specifics relating to risk forecasting that spark their areas of interest and true enjoyment, the results will delight you and your organization.


Adjustments are the constant

There will always be unforeseen circumstances that impact your modeling needs. Where the standard data requires manipulation, even a simple template for incorporating controlled adjustments to economic input data will set your team up for success, as sketched below.
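
Here is a minimal sketch of what such a template-driven adjustment step might look like, assuming an adjustments file with scenario, mnemonic, date, adjusted_value, and reason columns; the layout is an assumption you would tailor to your own data model, with the reason column carried through for audit purposes.

```python
import pandas as pd

def apply_adjustments(economics: pd.DataFrame, template_path: str) -> pd.DataFrame:
    """Apply controlled overrides from an adjustments template to baseline economic data.

    Assumes the baseline has columns scenario, mnemonic, date, value and the
    template has scenario, mnemonic, date, adjusted_value, reason.
    """
    adjustments = pd.read_csv(template_path, parse_dates=["date"])

    adjusted = economics.merge(
        adjustments, on=["scenario", "mnemonic", "date"], how="left"
    )
    # Use the override where one exists, otherwise keep the original value
    adjusted["value"] = adjusted["adjusted_value"].combine_first(adjusted["value"])
    return adjusted.drop(columns=["adjusted_value"])
```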

Global events such as the COVID-19 pandemic and the Russia-Ukraine conflict, national interest rate changes, or even weather- or industry-related metro-level events shouldn't run into platform limitations on how you want to handle these unforeseen market or portfolio changes. There are methods for adapting to these circumstances in the most appropriate manner, and with confidence that the desired approach was implemented completely and accurately.

For example, we may need to incorporate economic scenarios that are specific to one or more of these events and that fall outside the typical scope of a subscription. With automated economic data preparation in place, our partner organizations have handpicked specific variables and/or blended multiple versions of these scenarios into their exposure forecasts, allowing them to take the most appropriate action at the time. This saved them time and reduced risk in their qualitative factor adjustment processes.


Don't forget the breadcrumb trail: auditing and logging musts

An additional and critical component of a successful automated process is the capability to leave 'breadcrumbs' with information such as the following (a minimal logging sketch follows the list):
• Run dates and times
• Completion dates and times
• User(s)
• Full logging of data sources, business logic, and output data
• Automated reporting/analysis of input and output data
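
As one hedged example of what those breadcrumbs could look like, the sketch below appends run metadata to a JSON-lines audit log alongside the generated data. The field names and file location are assumptions you would align with your own control framework.

```python
import getpass
import json
from datetime import datetime, timezone
from pathlib import Path

def write_breadcrumbs(run_id, parameters, sources, outputs, started_at):
    """Append an audit record for one data generation run."""
    record = {
        "run_id": run_id,
        "user": getpass.getuser(),
        "started_at": started_at.isoformat(),
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "parameters": parameters,
        "data_sources": sources,
        "outputs": outputs,
    }
    log_path = Path("audit/economic_data_runs.jsonl")
    log_path.parent.mkdir(parents=True, exist_ok=True)
    with log_path.open("a") as log:
        log.write(json.dumps(record) + "\n")
```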

When you can answer important questions about analytics and data origins (who ran the process, when it ran, when it last ran, and with what parameter settings) with system-generated outputs, you gain greater peace of mind knowing you have covered your bases from an audit perspective.

Are you ready to automate?

Making the shift from manual data prep to a more automated approach isn't a one-and-done exercise; great organizations are continually watching for signals in the market that they need to factor into their business and risk strategies. But as the saying goes, change doesn't happen without finding a place to start. A good risk analytics team is motivated first and foremost to automate where the burden of manual oversight can be replaced with the joy of adding a human point of view on data for insight and innovation. And a great risk analytics organization will only get better over time as it gains confidence in its ability to automate where the gains are greatest and manipulate only where absolutely necessary. It is important to maintain a balance between flexibility, control, and automation: when your organization is nimble, reducing the time required to implement improvements helps keep the human perspective alive in analytics.


The intersection of math, economics, and athletics sparks Austin's passion for analytics in the world of finance. Austin advises on and implements modernization and risk analytics strategies that perform well and equip organizations with the tools and understanding necessary to meet the dynamic demands of modern data and analytics. Connect with Austin on LinkedIn or connect with Corios to kick off a project.

Compliance Analytics Gaps: How to Fix Bad History with Novel Data Bounding

Close Gaps in Compliance Analytics with Data Bounding

It feels like the stakes for predicting and managing risk rise with each passing year. Does the universe need to keep dishing out market, environmental, and health crises while regulated industries are amid massive digital transformation and analytics platform modernization?

The insurance industry is facing down the compliance deadline for the FASB’s Long Duration Targeted Improvements (LDTI). Public carriers should be on the cusp of final transition to running new models developed over the past two years.

Similarly, most financial institutions should have their Current Expected Credit Loss (CECL) compliance transitions in the rearview mirror. However, many credit unions have yet to reach key milestones toward implementation of the new credit standard.

And for private and smaller entities just now converting new roadmaps and processes into trial implementations, what are the lessons learned from their larger counterparts to help smooth the process?

Timing, business considerations and the scope of modernization may vary among companies impacted by new regulatory standards, but there is one unavoidable similarity: all will surface analytics gaps. Deadlines can’t be missed, but issues must be addressed. All that’s left is determining how and when.


Picking Up the Jellyfish

Modernizing analytics practices for 800+ insurance analysts

We recently participated in the FIMA Boston conference, one of the many springtime financial services industry events showcasing data, analytics, and AI innovations on the digital transformation odyssey. And the single loudest takeaway from these events is that this journey we've all been on for nearly a decade is a journey with no end.

No longer viewed as a destination, transforming enterprise analytics is a virtuous cycle of data decisioning and predictions, governance and security that drive greater transparency and fluidity in our pursuit of analytics excellence in the cloud. Yet despite the jellyfish-like squishiness and uncertain risk for pain, there’s more optimism and a sense of clarity found in a more well-worn path for modernizing data analytics assets.

A look back at our work so far with customers in insurance and financial services highlights captivating insights learned in their legacy SAS asset transformations. In one assessment alone we discovered that only 30% of analysts were actively developing on the platform, that a percentage of those were exporting data rather than leveraging it directly in the warehouse, and that nearly 20% created major security risks by placing open-text passwords in their code.

Which highlights our second big takeaway from discussions with the C-suite and down the command line: it's beyond time to get hands-on and transform workloads that are more and more at odds with the prevailing enterprise data strategy in the cloud. Now that the broad infrastructure and processes are in place, all eyes and budgets must focus on the decades-old methods and platforms, like your legacy SAS workloads, that today mostly encumber the people and processes tied to their analytics advantage.

Finding the edges of the jellyfish without getting stung

Corios was hired by a prominent insurance carrier to modernize their analytics and data practices for all things analytical: underwriting, pricing, claims, repairs, coverage, compliance, and regulatory support. They wanted to reduce the cost of data storage, to align all their analysts on a consolidated set of tools and environments, and to modernize the enterprise so they could react to climate events and other large-scale adverse events faster and more efficiently.


What’s in your SAS code? Find out before you modernize

“I have 4,000 SAS users and 7 million lines of SAS code, and nobody knows what it all does.”

These are the words of the Chief Data Officer of one of our insurance industry clients.

And the CDO isn’t the only one in the C-suite interested in discovering what’s in their SAS code. It’s a ‘must-know’ for data and analytics transformation to leverage advantages with open source and cloud investments. Our key stakeholders at companies across banking, insurance, manufacturing and energy utilities are all asking the same question:

How do we tackle SAS modernization as part of our digital transformation?

But before you can answer the 'what does it all do' question about your SAS workloads, it's valuable to know the scope and depth of SAS in your enterprise and the impact all that code is having on your operation. Some wonder if this discovery is even possible; we know the key is establishing a ground-truth baseline of existing SAS assets, information that is often hiding in plain sight.

Corios Rosetta Scanner helps clarify SAS usage for the C-suite

In the summary video below (approximately 13 minutes), I'll walk you through some concrete examples of Corios Rosetta scanner tactics we use to quantify and qualify the value of your SAS analysts, data, and workloads. Here's a brief snapshot of key takeaways for the CIO, CDO, CCO, and CAO:

Critical SAS modernization targets of the Chief Information Officer (CIO)
  1. Reducing expensive tier one storage costs on data analytics platforms by as much as 50-67% by moving data files that haven’t been used by workloads in the past 12 months.
  2. Identifying current and potential business wins supported by data and analytics platforms by answering the ‘who’ question among users that highlights your most productive and efficient SAS users – stakeholders who can help inform the modernization effort.
  3. Building a roadmap to integrate analytics with open source and cloud platforms by isolating the workloads with the highest performance and business payoff when aligned to modern code languages like SQL, Python and Spark.
Critical SAS modernization targets for the Chief Data Officer (CDO)
  1. Identifying and mapping all enterprise data used by SAS analysts to comprehend business impacts like understanding where a producer’s 42 data tables are being used in over 450 workloads.
  2. Producing real cost efficiencies by reducing replication/duplication of data sources via a deeper understanding of best actions to take to enhance effectiveness and security related to how the data is produced and consumed.
  3. Gaining data storage efficiencies by knowing what enterprise repositories and end user libraries benefit from relocation to data lakes or strategic databases and isolating your ‘rogue or rookie’ data users that require training to become good data stewards.
Critical SAS modernization targets for the Chief Compliance Officer (CCO)
  1. Recognizing personally identifiable information (PII) otherwise hard to see in SAS files without tools like Rosetta that suss out classifications like names and account numbers in user data.
  2. Reducing human-introduced risks by nature of analyst work methods and behaviors by identifying open text passwords, undocumented code and large code files.
  3. Mitigating the risk exposure of under-documented and un-validated decision models, often called “End User Computing” or EUC instances.
Critical SAS modernization targets for the Chief Analytics Officer (CAO)
  1. Identifying high-value analytics workloads and the characteristics of the users who build them by scoring stats such as:
    • frequency of results used by others in the organization
    • the number and volume of data sources used to build them
    • the analytic intensity involved in the builds
    • the skill level of analysts both building and using them
  2. Quantifying business value and compliance risk exposure of high value machine learning (ML) assets in an organization.

There's something for everyone to discover about SAS that will enhance C-suite planning for strategic transformation into a data insights-driven business. When you're ready to leverage the power of Rosetta to tackle your SAS modernization, give us a shout to discover more about your SAS code.

Corios CEO Robin Way shows how Rosetta Scanner helps the Chief Information, Data, Compliance and Analytics Officer with modernization

Count Us In on More Data Analytics

Corios transitions from pandemic to post-recession data analytics storytelling

What a weird several years we’ve all experienced. The pandemic presented all businesses with unique circumstances to overcome. It is possible the biggest challenge was staying the course in data analytics transformation efforts while pushing all employees into the digital landscape full time. Even as we managed to keep Corios working through COVID, we chose to stand up to adversity and consider our options beyond simply riding out the turmoil.

Would we be satisfied with the pre-pandemic status quo? Or would we instead choose to mature and grow our management analytics business?

When 2022 arrived, we chose evolution of the Corios Way. We retained our core team throughout the pandemic, so, instead of a rebuild, our expansion is underway with purpose and a recommitment to our storytelling vision.

We are still here, simplifying the complex in data strategy and humanizing the mechanical in analytics for high business value. And there is so much more to do.

Way beyond Portland: Living values in hybrid mode

The pandemic put a spotlight on heavy culture shifts for a lot of firms. Work from home, remote connectivity, and the cloud were not yet widely adopted or operational. Our own dedication to the cloud for ourselves and our clients, with SOC2- and PCI-compliant management controls, made for a smooth shift to Work-From-Wherever.

Now, as some of the “old ways” of office culture are coming back, not only are we celebrating team events in person in our Portland HQ, but we are also adding work from work locations in new markets. In October we opened a new office in Denver, where Austin Barber leads our Credit Risk and Compliance practice. In 2023, we will continue to explore where in the U.S. we can put down more roots as we expand the team and serve new clients.
