A Community Bank’s Data Was Everywhere. Now It Goes to Work
Data everywhere. Decisions going nowhere.
This full-service community bank had the ambition to become genuinely data-driven. The problem wasn't intention — it was architecture. Data lived across a patchwork of disconnected source systems: Abrigo, H-360, Baker Hill, and CSV exports that nobody fully trusted. Reports took too long to produce. Analysts spent more time reconciling numbers than reading them. Decisions that should have been data-driven were still running on instinct.
The bank's leadership knew the status quo was costing them — in efficiency, in risk, and in the intelligence they needed to compete with larger institutions. What they didn't have was a clear, credible picture of where to start.
The core challenge: Fragmented data assets across multiple core banking platforms with no unified lineage, no trusted ETL process, and no architecture that could support modern BI or future AI initiatives.
You can't fix what you can't see. And they couldn't see much.
Before any transformation could begin, the bank needed clarity. Clarity on what data they actually had. Clarity on where it lived, how it moved, and what depended on what. And clarity on the gap between what they had and what their ambitions required.
Three problems ran beneath everything else. First, nobody had ever catalogued the data estate in full — assets were assumed to exist, not documented. Second, the dependencies between systems were tribal knowledge: people knew how the data moved because they'd watched it move, not because it was recorded anywhere. Third, there was no ETL strategy. Data was extracted ad hoc, transformed inconsistently, and loaded into systems that couldn't talk to each other. The foundation for any BI capability simply wasn't there.
The bank couldn't move to modern BI — or toward any AI ambitions — without first answering a basic question: do we actually know what data we have, and can we trust it?
Start with the inventory. Build toward the architecture.
OnStak ran a structured BI and Data Modernization Assessment — designed not to produce a glossy slide deck, but to give the bank something they could act on: a complete picture of their current data estate, a prioritised roadmap for ETL implementation, and a target architecture that their team could actually build toward.
The engagement ran in four phases, each feeding the next. Discovery came first — understanding the business context before touching a single system. Data collection followed, cataloguing every asset, every source, and every dependency. Report creation synthesised the findings into something the leadership team could use. Analysis and review closed the loop, pressure-testing the recommendations against operational reality.
Existing Data Architecture Review
Detailed mapping of current data systems across all source platforms — Abrigo, H-360, Baker Hill, CSV exports, and PDF files. Documented what existed, how it connected, and where the trust broke down.
Full Data Inventory
Comprehensive cataloguing and categorisation of all data assets across the bank. For the first time, leadership had a complete, authoritative list of what they owned — and what shape it was in.
Dependency Mapping
Identification of every interdependency between data sources — surfacing the hidden connections that had previously existed only as tribal knowledge. The map that makes safe change possible.
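Dependency mapping of this kind is, in effect, building a directed graph and ordering it. A minimal sketch of the idea in Python, using a topological sort to derive a safe change order — the system names come from the case study, but the specific edges and the `safe_change_order` helper are invented for illustration:

```python
from collections import defaultdict, deque

# Illustrative only: edge A -> B means B consumes data produced by A.
# These particular feeds are assumptions, not the bank's real topology.
DEPENDENCIES = {
    "Abrigo": ["csv_exports"],
    "H-360": ["csv_exports", "Baker Hill"],
    "Baker Hill": [],
    "csv_exports": [],
}

def safe_change_order(deps):
    """Topologically sort systems so each one is touched only after
    every upstream system feeding it has been accounted for."""
    indegree = defaultdict(int)
    for src, targets in deps.items():
        indegree.setdefault(src, 0)
        for t in targets:
            indegree[t] += 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for t in deps.get(node, []):
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(indegree):
        # A cycle means two systems feed each other — exactly the kind
        # of hidden coupling the mapping exercise is meant to surface.
        raise ValueError("circular data dependency detected")
    return order
```

Once the tribal knowledge is written down in this form, "what breaks if we change X" becomes a graph query rather than a guess.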
ETL Implementation Roadmap
A strategic, sequenced plan for implementing Extract, Transform, Load processes — moving from the current patchwork to a governed, reliable data pipeline. Prioritised ruthlessly: what to build first, what to defer, and why.
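To make the contrast with ad-hoc extraction concrete, here is a minimal sketch of one governed ETL step — CSV in, typed rows out, loaded into a warehouse table. SQLite stands in for the warehouse, and the column names and sample data are invented for this example:

```python
import csv
import sqlite3
from io import StringIO

# Hypothetical source extract; real feeds would come from the core
# banking platforms, not an inline string.
RAW_CSV = """loan_id,balance
L-001,25000.00
L-002,13750.50
"""

def extract(raw: str):
    """Read the raw export into dictionaries, one per row."""
    return list(csv.DictReader(StringIO(raw)))

def transform(rows):
    """Enforce types in one documented place, replacing the
    inconsistent ad-hoc reshaping the assessment found."""
    return [(r["loan_id"], float(r["balance"])) for r in rows]

def load(records, conn):
    """Idempotent load into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS loans (loan_id TEXT PRIMARY KEY, balance REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO loans VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
```

The point of the roadmap is sequencing: each feed gets moved into a pipeline shaped like this, in priority order, rather than all at once.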
From fragmented sources to a unified data platform.
The proposed architecture gave the bank a clear destination. Source systems feed a centralised BOP Data Platform through four ingestion patterns — Event Streams, Push, Batch Loads, and CDC Pull. Within the platform, an ETL layer feeds a Data Warehouse and Data Lake, all governed by a unified security and monitoring layer. Data consumers connect via BI tools, data apps, and AI use cases.
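The shape of that platform can be sketched in a few lines of Python. The four ingestion patterns are the ones named above; how each feed is routed between lake and warehouse is an assumption made for illustration, not the bank's actual design:

```python
from dataclasses import dataclass, field
from enum import Enum

class IngestionPattern(Enum):
    """The four ingestion patterns in the target architecture."""
    EVENT_STREAM = "event_stream"
    PUSH = "push"
    BATCH_LOAD = "batch_load"
    CDC_PULL = "cdc_pull"

@dataclass
class Feed:
    source: str
    pattern: IngestionPattern

@dataclass
class BOPDataPlatform:
    warehouse: list = field(default_factory=list)  # curated, modelled data
    lake: list = field(default_factory=list)       # raw landing zone

    def ingest(self, feed: Feed) -> None:
        # Assumed routing: everything lands raw in the lake; structured
        # batch and CDC feeds are additionally loaded to the warehouse.
        self.lake.append(feed.source)
        if feed.pattern in (IngestionPattern.BATCH_LOAD, IngestionPattern.CDC_PULL):
            self.warehouse.append(feed.source)

platform = BOPDataPlatform()
platform.ingest(Feed("Abrigo", IngestionPattern.BATCH_LOAD))
platform.ingest(Feed("H-360", IngestionPattern.CDC_PULL))
platform.ingest(Feed("Baker Hill", IngestionPattern.EVENT_STREAM))
```

Whatever the eventual routing rules, the value is the same: one place where every feed enters, under one security and monitoring layer, instead of point-to-point extracts.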
A clear picture. A credible path. A foundation worth building on.
The assessment gave the bank's leadership team something they hadn't had before: the complete, honest view of their data estate they needed to make confident decisions. The outputs weren't advisory abstractions — they were operational artefacts the bank's team could take into the next phase of work.
"Most organisations spend years in the dark about their own data. This engagement compressed that clarity into weeks — giving the bank the map and the mandate to build something real."
— OnStak Data Practice

Modern tools. Proven patterns. No vendor lock-in.
Every technology recommendation served the architecture, not the other way around. The stack was chosen for the bank's scale, their team's capabilities, and the AI ambitions they'd articulated — not because it was fashionable or because a vendor paid for the mention.