Increasingly you hear the terms Two-Speed, Bi-Modal, or Multi-Speed IT; sometimes it is described simply as friction between business and IT.
In simple terms, multi-speed IT is the recognition that business users, data scientists, and developers want to be able to rapidly experiment with information to:
- Validate a hypothesis;
- Investigate a specific anomaly;
- Build new insight;
- Build new capabilities;
- Build new customer-facing products.
When these personas embark on a task, they may not be sure that what they are experimenting with will actually be of use; they may have time pressures that will not allow them to wait; or they may simply not want to go through the exercise of getting a project funded through the normal IT process.
Effectively, these users increasingly demand a self-service model that does not require interaction with IT. To be effective, however, they still need to connect with and build on capabilities that are provided or managed by IT, and they have to work within the confines of governance and compliance policies.
Knowledge workers have been adopting a new class of agile tool set to satisfy the goals outlined above. These tools range from true self-service analysis tools for the Citizen Analyst, such as Watson Analytics, to more traditional reporting/BI capabilities, to self-service data preparation (or wrangling) tool sets. They have become more and more prevalent in businesses of all sizes, from large to small.
There is also a constant stream of new companies and offerings in this space.
The challenge with the above lies in the conflict between what the Self-Service User and the Self-Sufficient Builder are looking for and the constraints of traditional IT, or the Capability Buyer. Traditional IT is very much focused on the following characteristics and problems:
- Driven by budget and cost;
- Ongoing cost of managing and maintaining systems, the acquisition cost of infrastructure and services, and the fixed cost of ongoing improvement to these systems;
- Have to manage the Service Level Agreements of the systems (availability, response time, and cost);
- New work is driven by projects or by specific requirements, which can be self-initiated or driven from the business side;
- When it comes to data and analytics, there is a strong focus on defining the data model and the ETL required to populate it;
- Issues around governance, compliance, trust and confidence, data quality, data classification, and data policies are very prevalent;
- A large focus on ensuring the project can run in a production environment and satisfy the overall Service Level Agreement/Objective;
- Projects take multiple weeks, months, or even years to deliver; months and years are the norm.
The flip side to the above is the goal of the knowledge workers, who require a much greater degree of agility. When they start on a task it is often driven by a hypothesis, and they are not always sure the hypothesis will be valid or lead to insight that provides long-term benefit. Delivery of projects is measured in days to weeks; anything longer is simply not realistic. This fast and agile paradigm requires:
- Ability to find data relevant to the hypothesis;
- Initial hypothesis validation before provisioning data;
- Provisioning and shaping/preparing data into a form that will be well suited to further discovery;
- Discovery over data which will include further shaping and analysis that will build out a specific analysis or report;
- Collaborative model that allows sharing with a group;
- And all of this delivered as self-service: low barrier to entry, frictionless.
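The steps above can be sketched in a few lines of pandas, the kind of tool a self-service analyst might reach for. This is a minimal illustration only: the dataset is fabricated in-line, and the column names and the hypothesis (web out-earns store) are assumptions for the example, not anything from a real product.

```python
import pandas as pd

# Steps 1-2: "find" the data and do a quick hypothesis check on a small
# sample before provisioning the full set (here the sample is fabricated).
sample = pd.DataFrame({
    "region":  ["east", "west", "east", "west", "east"],
    "channel": ["web", "store", "web", "web", "store"],
    "revenue": [120.0, 80.0, 200.0, 150.0, 60.0],
})

# Hypothetical hypothesis: the web channel out-earns the store channel
# on average revenue per transaction.
by_channel = sample.groupby("channel")["revenue"].mean()
hypothesis_holds = bool(by_channel["web"] > by_channel["store"])

# Steps 3-4: shape the data into a form suited to further discovery --
# a region x channel pivot an analyst could iterate on or share with a group.
pivot = sample.pivot_table(index="region", columns="channel",
                           values="revenue", aggfunc="sum", fill_value=0.0)

print(hypothesis_holds)
print(pivot)
```

If the quick check holds, the analyst provisions the full dataset and repeats the same shaping at scale; if not, the idea is abandoned at near-zero cost — which is exactly the low-friction loop the list describes.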
Many of these projects are ad hoc, one-off tasks or tasks that will be used only infrequently, and that is among those that actually lead to real insight; in many cases the hypothesis is not proven out. We are starting to see a model where thousands of ideas lead to hundreds of useful insights, of which tens become ongoing, critical new business insights that have to be deployed into the more formal business processes.
The fundamental question we have to ask is how does a company provide the supporting technologies around data and analytics that enable:
1. Self service for data knowledge workers
2. Self service for builders
3. Support for centralized management of the actual production systems, which are derived from the capabilities and solutions built by the self-service knowledge workers or builders.
It should be noted that when we talk about this central management of systems, it does not imply you must have a central IT team that manages individual systems; that can, and often will, be handled through managed cloud services. But you do need a group that ensures the architecture of these systems can satisfy the required SLA levels (again: response time, availability, and cost), as well as governance and compliance around the data and analytics, the life cycle of data, the life cycle of analytic models, and so on.
Going forward, the industry will evolve to support a self-service paradigm with a strong collaborative linkage to the more traditional IT capabilities, not only to satisfy challenges such as regulatory compliance, but also to ensure strong trust and confidence in the tasks being executed by the self-service personas.