


Data & AI: top six talking points from our recent breakfast event

Data is the new currency of the modern digital world. Being able to gather, store, transform and visualise data in an actionable way is critical to drive operational excellence, grow revenues and improve customer experience.

Data & AI platform: top six considerations

With that in mind, we recently had the opportunity to host a breakfast session to discuss how clients can take advantage of Microsoft Azure Data Services, reaping all the benefits of modern cloud-based data analytics tools.

The session, led by our own Dan Smith (National Head of Data & AI) and Sam Steere (Senior Consultant, Data & AI), also covered topics such as how to define a cloud-first data architecture for your organisation, and data security.

Here are the six top-of-mind topics discussed during this session.

What are the key economic benefits that organisations get from cloud adoption?

The main benefit here is the separation of storage and compute using hyperscale cloud technology. In an on-premises world, you must build infrastructure that handles maximum capacity, with its associated high-availability (HA), disaster recovery (DR) and management overheads. The cloud enables you to pay only for what you use, and switch off resources when they are not needed.

Azure Databricks is a classic example of a technology that lies dormant when not in use, then rapidly scales up in seconds when called upon. This consumption-based model becomes more and more compelling as technology evolves.

Experimentation is also freer in the cloud, because analysts and developers can spin up playpens with huge data sets to explore their value, then switch them off when not needed. The more you leverage Platform-as-a-Service (PaaS), rather than Infrastructure-as-a-Service (IaaS) lift-and-shift, the more economical the cloud becomes. No more patching, monitoring and upgrades: you simply stand up the service and manage the evergreen nature of the platform.
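The economics above can be made concrete with some simple arithmetic: on-premises you provision for peak and pay around the clock, while a consumption model bills only the hours you actually run. The rates and workload figures in this sketch are entirely hypothetical, purely to illustrate the shape of the comparison:

```python
# Illustrative only: hypothetical rates, not real Azure pricing.
HOURS_PER_MONTH = 730

def on_prem_monthly_cost(peak_units: int, unit_hour_rate: float) -> float:
    """On-premises: capacity is sized for peak load and billed 24/7."""
    return peak_units * unit_hour_rate * HOURS_PER_MONTH

def cloud_monthly_cost(usage_hours_by_units: dict, unit_hour_rate: float) -> float:
    """Consumption model: pay only for the hours each capacity level runs."""
    return sum(units * hours * unit_hour_rate
               for units, hours in usage_hours_by_units.items())

rate = 1.0  # hypothetical $ per compute unit per hour
# Workload peaks at 10 units, but only runs 8 units for 40 h
# and 2 units for 100 h in a typical month.
fixed = on_prem_monthly_cost(peak_units=10, unit_hour_rate=rate)
usage = cloud_monthly_cost({8: 40, 2: 100}, unit_hour_rate=rate)

print(f"Fixed-capacity cost: ${fixed:,.0f}")  # 10 units x 730 h = $7,300
print(f"Consumption cost:    ${usage:,.0f}")  # 8x40 + 2x100 = $520
```

Even with generous usage assumptions, the gap comes from paying for idle peak capacity, which is exactly what technologies like Azure Databricks avoid.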

What are the key operational benefits that organisations get from cloud adoption?

When designed correctly, you no longer need to ask the question ‘how much data do we bring in?’. The simple answer is that you take it all and work with it later. This is a revolution in data accessibility and timeliness that enables use cases that were previously not possible.

Imagine a website or customer portal that needs to display real-time reports to users, or requires real-time data to process transactions (scenarios such as spot-price commodity trading, incident response, or click-frenzy ecommerce situations).

Data scientists can trawl, crunch and analyse data sets at speeds never seen before. No longer do they need to wait hours or weeks for large queries to run, no longer do they need to request that data points be made available, and no longer do they need to wait for quiet periods to lessen the impact on source systems.

How do organisations handle the ‘firehose’ of changes coming from cloud platforms?

Evergreen cloud platforms challenge an organisation’s operating model. Gone are the days of running a project, putting it into a ‘run and maintain’ state and then planning a major upgrade project five years later. Once you are in the cloud, you are always ‘on the edge’.

Rather than being a threat, this is an empowering position. It pushes you to operate in a continuous and incremental change mindset. IT and the business users work hand in hand in order to absorb and capitalise on the innovations.

When you consider the changes that Microsoft makes, it’s reassuring to understand that changes tend to happen at the ‘bottom’ of the technology stack, rather than impacting end users dramatically and on a regular basis. Empired has oriented our services around this new evergreen model, which we refer to as Continuous Service Evolution (CSE). It’s the best of the worlds of project, managed service and agile delivery.

How do you build trust in the data when moving from old to new, or using Artificial Intelligence (AI) models?

In the AEMO case study, our biggest challenge was people. People needed to be certain that the information being provided in the cloud solution was equal to, or more accurate than, what they were producing on-premises. Building trust was the most important aspect of this transformation, so change and adoption should not be underestimated in any platform shift. People need to be continually reminded of the benefits and outcomes being sought, so that when the going gets tough, they can see the ‘why’.

In the vision AI demo shown during the session, it became apparent that some degree of subjectivity comes into play when dealing with a vandalised NBN box, as opposed to a clearly damaged box. These thresholds, and the business process that sits behind the AI supporting it, are a critical people and process consideration.

Is blob storage necessary for low-volume, well-structured data sets?

Many of our examples involved landing data in blob storage (e.g. Azure Data Lake) before transitioning it to a data warehouse. Blob storage isn’t necessary if you’re coming from well-structured, low-volume data sources where you are not concerned about performance impacts or the need to transform the data. This highlights the value of being able to consume individual services at consumption prices, and only pay for what you need. No two data pipeline reference architectures are the same, but as data volumes increase and availability SLAs tighten, we tend to see blob storage becoming more popular.
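The trade-off above can be framed as a simple routing decision per source. This is only a toy sketch: the threshold and criteria are hypothetical and would be tuned per organisation, but it captures the rule of thumb discussed at the session:

```python
# Toy routing rule for the landing-zone decision; the one-million-row
# threshold is a hypothetical illustration, not a recommendation.
def landing_zone(rows_per_day: int, well_structured: bool,
                 needs_transform: bool) -> str:
    """Decide whether a source lands in blob storage first,
    or loads straight into the data warehouse."""
    if well_structured and not needs_transform and rows_per_day < 1_000_000:
        return "warehouse-direct"
    # e.g. Azure Data Lake staging, then transform and load
    return "blob-staging"

print(landing_zone(50_000, well_structured=True, needs_transform=False))
print(landing_zone(50_000_000, well_structured=True, needs_transform=False))
```

As volumes grow or transformation needs appear, more sources tip into the staging path, which matches the trend we see with blob storage.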

What are the merits of cleaning the source before you put through ETL?

If you are in a position to improve quality at the source without impacting performance, you should absolutely consider this option. The issue we see is that running data quality services at that point creates a performance hit on the source, and also requires local infrastructure or agents to run. And when the source is a cloud-based SaaS service, this may not even be an option for you.
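When quality cannot be enforced at the source, the checks move into the ETL step after extraction, so the source system takes no performance hit. A minimal sketch of that pattern follows; the rules and field names are hypothetical:

```python
# Minimal sketch: validate extracted rows inside the ETL step rather than
# at the source. Rules and field names here are hypothetical examples.
RULES = {
    "customer_id present": lambda r: bool(r.get("customer_id")),
    "amount is non-negative": lambda r: isinstance(
        r.get("amount"), (int, float)) and r["amount"] >= 0,
}

def validate(rows):
    """Split extracted rows into clean rows and (row, reason) rejects."""
    clean, rejects = [], []
    for row in rows:
        failed = [name for name, rule in RULES.items() if not rule(row)]
        if failed:
            rejects.append((row, "; ".join(failed)))
        else:
            clean.append(row)
    return clean, rejects

rows = [
    {"customer_id": "C1", "amount": 120.0},
    {"customer_id": "", "amount": 80.0},
    {"customer_id": "C3", "amount": -5},
]
clean, rejects = validate(rows)
print(len(clean), "clean,", len(rejects), "rejected")  # 1 clean, 2 rejected
```

Routing rejects to a quarantine area, rather than silently dropping them, also helps build the trust in the data discussed earlier.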

There is much more to cover, but each organisation is different. From here it really needs to be an individual conversation. Feel free to reach out to us to understand this model and how it might apply to your business.

How can you get started on your Data & AI Journey?

There are many ways to get started on your modern data & AI journey, and it all depends on your current position.

If you see a need to run a few pilots, Empired is qualified to run both the Power BI GoFast and Azure Data Services GoFast. These are Microsoft funded programs and assessed via a formal qualification process.

If you need some assessment of your current-state, Empired has a Data Detective consulting package that could help.

If you need to get back to basics and understand your users and their needs, we would suggest engaging our Digital Advisory for CX and UX design, followed by architecture and roadmapping.

In any case, your first step is contacting us.

This blog is part of the #datareimagine series. For more experts' insights, clients' experiences and to download our datasheets, click the banner.


Posted by: Brock Sperryn, Client Executive | 27 November 2019

Tags: Business Intelligence, Data Analytics, Data Insights, #datareimagine
