Leon Hagopian, Regional Head of Digital Practice | Digital Transformation, UBS
How you use data determines whether you hit your digital transformation targets!
For every demand that comes across my desk, the first questions I ask are: is the data available? In what format is it available? Can we easily access it? If we can easily access it, can it be integrated into our Artificial Intelligence Analytical tooling (AIAT)? If the answer to any of the above is a no, or a nervous yes, then the execution will ultimately take longer and, as a consequence, the cost will be larger than forecast. I have countless examples where the programming, the machine learning algorithm development and the testing/productionisation took less time to execute than it took to access the data.
As the Regional Head of the APAC team that delivers emerging technology solutions for UBS, I lead a large, strong team of AI consultants, data scientists and technical delivery specialists who provide expertise and advisory services in emerging technology. We drive programs from conceptualization to execution: assessing opportunities, analyzing requirements, torturing the data, and then developing and delivering the technical solution front to back (F2B).
Our recommendations are vendor agnostic. Our resources are technology agnostic and our solutions are location agnostic. Why? Because our only dependency is data.
There is a school of thought that you must use a particular vendor to deliver an optimal solution. Whilst some vendors ultimately deliver a better outcome than others, this is typically not the major issue. Others point to legacy infrastructure and applications not built for scalable, sustainable digital transformation, and others point to culture. Whilst all of these add weight, the ultimate and major driver of a successful, scalable digital transformation outcome is access to data in a timely and usable manner, and the ability to use it. As Ronald Coase wrote, ‘Torture the data, and it will confess to anything’.
Imagine the actionable insights we could gain if we could replicate the Netflix or YouTube recommendation engines with our clients. Imagine if one innocent client view of a content article on your internet page could be transformed into a proactive contact by one of your advisors which, ultimately, transforms into a sale. This is what Netflix does to ensure your monthly subscription continues. This is not rocket science. Netflix has immediate access to data via a central repository/warehouse, which enables its technical and business development experts to analyze the data and surface insights almost instantaneously.
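To make that concrete, here is a toy sketch in Python of how a single content view might become an advisor prompt once a scoring model exists. The event fields, the model interface and the alerting step are all hypothetical illustrations, not a description of any real system:

```python
# Hypothetical sketch: turning a client's content view into an advisor prompt.
# The event schema, recommender interface and alert channel are illustrative
# assumptions only.
from dataclasses import dataclass

@dataclass
class ContentViewEvent:
    client_id: str
    article_id: str

def handle_view(event: ContentViewEvent, model, threshold: float = 0.8) -> None:
    # Ask a pre-trained recommender how strongly this view signals intent.
    score = model.predict(event.client_id, event.article_id)
    if score >= threshold:
        # In practice this would notify the client's advisor through
        # whatever CRM or messaging channel the firm uses.
        print(f"Alert advisor: client {event.client_id} scored {score:.2f} "
              f"after viewing article {event.article_id}")
```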
Thanks, Einstein, for the tip, I can hear you saying. Give me solutions that apply to organizations with 60 years of legacy applications, which do not sit on the cloud and are housed on legacy infrastructure. How do we access this data?
Firstly, what does access to data mean? Large, global organizations whose core business operating model is not technology ultimately have siloed vertical applications sitting on infrastructure stacks which don’t talk to or integrate with each other. Whilst migration to the cloud will mitigate some of this, the immediate goal of such an organization is to develop a data warehouse, or “data lake” as it is commonly known, which enables real-time access to data.
This is not a small piece of work but, if done methodically and simply, it can deliver amazing and immediate results for the organization.
So what are the steps to achieve this?
1. Build a team with some super-strong data scientists.
2. Understand the historical data points across the various verticals that you require access to. Your analysis should consider:
a. Which data sets provide the best level of actionable insight required now and, more importantly, as part of your strategic roadmap in the future.
b. How you need to pull the data (particularly from legacy applications). You might need some good old-fashioned managed file transfers or robotic process automation bots to do this.
c. The integration points the data warehouse is required to have with the AIAT your organization has purchased.
d. What are the “running costs” (storage, retention)?
e. A fail-fast, learn-fast mentality, ensuring your team is looking for strategic and scalable gains.
3. Integrate the in-scope data into a shared data layer/warehouse, including the regularity of feeds (a minimal ingestion sketch follows this list).
4. Develop automated integration of the data into your AIAT tool of choice, where your data analytics work will be undertaken.
5. Torture the data!
6. Develop and trial a Netflix-style recommendation model POC against a use case (also sketched below).
7. Constantly refine and refresh the thresholds to get the ultimate and desired precision (the final sketch below shows one way to do this).
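To ground step 3, here is a minimal Python sketch of landing a flat-file extract (the kind a nightly managed file transfer from step 2b might deliver) into a shared data layer as Parquet. The paths, file names and columns are invented for illustration:

```python
# Illustrative sketch of step 3: land a legacy flat-file extract into a shared
# data layer as partitioned Parquet. Paths, file names and columns are
# hypothetical.
from pathlib import Path
import pandas as pd

LANDING_DIR = Path("/landing/crm")         # where the MFT job drops extracts
LAKE_DIR = Path("/datalake/client_views")  # the shared data layer

def ingest_daily_extract(business_date: str) -> None:
    extract = LANDING_DIR / f"client_views_{business_date}.csv"
    df = pd.read_csv(extract, parse_dates=["view_timestamp"])
    # Basic cleansing before the data reaches the analytics tooling.
    df = df.dropna(subset=["client_id", "article_id"])
    df.to_parquet(LAKE_DIR / f"business_date={business_date}.parquet")

# A nightly scheduler would then call, for example:
# ingest_daily_extract("2024-01-31")
```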
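For step 6, a recommendation POC does not need to start complicated. The sketch below implements plain item-to-item cosine similarity over a client-by-content interaction matrix, the basic idea behind “clients who read this also read…”. The interaction data is made up:

```python
# Minimal sketch of step 6: an item-to-item similarity recommender over a
# client x content interaction matrix. Data here is synthetic.
import numpy as np

# Rows = clients, columns = content articles; 1 = viewed.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
])

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between article columns."""
    norms = np.linalg.norm(matrix, axis=0)
    norms[norms == 0] = 1.0  # avoid division by zero for unseen articles
    normalised = matrix / norms
    return normalised.T @ normalised

def recommend(client: int, matrix: np.ndarray, top_n: int = 2) -> list[int]:
    sims = item_similarity(matrix)
    scores = matrix[client] @ sims        # aggregate similarity of viewed items
    scores[matrix[client] > 0] = -np.inf  # don't re-recommend viewed articles
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(client=0, matrix=interactions))  # articles client 0 hasn't seen
```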
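And for step 7, refining thresholds is a measurable exercise, not guesswork. Assuming you have model scores alongside labels for whether a sale actually followed, a precision-recall curve lets you pick the lowest alert threshold that still meets your precision target. The numbers below are synthetic:

```python
# Sketch of step 7: choosing an alert threshold from model scores so that
# advisors only receive high-precision prompts. Labels and scores are synthetic.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])             # did a sale follow?
scores = np.array([.1, .4, .35, .8, .2, .9, .75, .3])   # model scores

precision, recall, thresholds = precision_recall_curve(y_true, scores)
# Pick the lowest threshold that still achieves the precision we want.
target_precision = 0.9
ok = precision[:-1] >= target_precision  # final precision has no threshold
chosen = thresholds[ok][0] if ok.any() else thresholds[-1]
print(f"alert when score >= {chosen:.2f}")
```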
Whilst the above sounds difficult and complex, if done correctly it will ensure the work is scalable to other use cases. The data is reusable and so, to an extent, is the algorithm.
Organizations purchase very expensive tooling, sometimes with the misconceived notion that it will automatically resolve all challenges; that it will work by itself. Purchasing the expensive tooling and playing around with it is fun, and it is required, but you must understand how the data is going to be used. How will the data be collected? How will it be cleansed, processed, automated, explored and integrated? Once you have worked this out, the cool Netflix-style recommendation engines will surely follow.
Remember, data first!
As a famous football manager once said about a misfiring player, in a line I have now applied to data: “What’s the use in having a Ferrari (cool AI predictive analytics tooling) if you don’t know how to drive it (no or inconsistent data)?”