
TiG and ThirdSpace Join Forces

Darren Howells

To Become the UK’s Leading Advanced Digital MSP

We’re delighted to announce that we have added ThirdSpace Ltd to the TiG Data Intelligence family. TiG are determined to help more customers thrive through technology that drives real business results. ThirdSpace will play a pivotal role in further securing our customers by providing leading identity and security solutions from Microsoft.

ThirdSpace has demonstrated clear market dominance when it comes to implementing high-quality Identity & Access Management (IAM), Customer Identity & Access Management (CIAM) and Mobility & Security (M&S) solutions. The company has also developed a very strong relationship with Microsoft, the leading provider of these technologies, and has won several prestigious industry awards.

ThirdSpace’s position as Microsoft’s go-to partner for Identity and Security will increase TiG’s cyber security expertise, with a specialism that complements our existing cloud-based services. This union creates a powerful joint proposition that delivers a fully comprehensive suite of services for secure digital transformation.

We have long recognised an increasing demand from our clients for security services, which we can now deliver via a proven, experienced and trusted brand and team.

The group now boasts over 210 employees providing advanced Azure transformational solutions, data analytics, 24/7 managed services, as well as the latest Sentinel and SOC solutions to identify and respond to critical security threats.

To drive the company’s ambitions as the UK’s leading Advanced Digital MSP, ThirdSpace’s CEO, Neil Coughlan, will join the TiG board as Chief Strategy Officer and Sales Director, Nick Lamidey, will join the board as Chief Sales Officer.

Neil Coughlan states “This acquisition builds on a long-standing relationship with TiG – a team that we know and trust, and an organisation that retains the same culture of supporting and developing great people to enable success. We are delighted to be realising one of ThirdSpace’s strategic goals in expanding our security capabilities with a full Managed Service Cloud Platform and offering.”

Des Lekerman, CEO, TiG states, “Over the years and with a number of strategic acquisitions, we have built a great business that we are extremely proud of. This acquisition is transformational as we can now provide a deeper and broader set of services to our clients. There is huge demand in the market for an advanced digital MSP with a customer centric flexible approach. The combined suite of services is a key differentiator in the market and a fantastic opportunity for all our people.”

The acquisition has been completed with the financial backing of minority investor BGF, which will continue to support TiG’s ambitious plans for expansion and market penetration. It is the third acquisition that TiG has made since BGF’s investment.



Building a data & analytics roadmap which delivers business value

Darren Howells

It’s not uncommon for data and analytics to get a first foothold through individual teams or siloed projects.

However, having multiple ad hoc approaches across an organisation can quickly become a headache and certainly doesn’t yield the benefits of a cohesive approach.

If you’ve decided it’s time to unify your existing data and analytics operations, or if you’re starting the journey from scratch, there is one element that should not be overlooked: the roadmap. In this article we’ll take a look at the elements needed for building a data & analytics roadmap.

So, what exactly is a data and analytics roadmap and why do you need one?

A data and analytics roadmap is the functional blueprint that gets you from wherever you are now to the end goal. It translates the ambitions of your business strategy into a long-term action plan, driven by timelines, deadlines, milestones and key metrics.

On a practical level, the benefits of having such a plan are fairly obvious; executing a data and analytics strategy is complex, and a clear view of the ‘what, when, and how’ keeps things coordinated. Without it, miscommunication, wasted resources and unmet goals are all too likely.

Your data and analytics roadmap not only keeps everyone on the same page regarding the practical steps; it also helps to promote a shared vision of the strategy among sponsors, team members and customers. A roadmap reduces uncertainty and helps to manage change by articulating exactly what is being done, when changes will occur, and the value that will ultimately be delivered to the business as a result.

So, let’s take a look at some key elements when building a D&A roadmap.

The essential stages of building a Data & Analytics Roadmap
1. Alignment to business strategy – identify and select your key performance areas

The first and most fundamental step when beginning this process is to outline the key performance areas (KPAs) that your roadmap will use. The areas that you identify should be contextually specific to your organisation and must always be tightly aligned to the business strategy.

Keep in mind that KPAs should be business objectives and not data and analytics goals (e.g., developing a data warehouse). Look to high-level categories, such as ‘customer base’ or ‘product development’ and let these provide the basis for key performance questions – such as, what opportunities are there for improving customer margin? What improvements are needed in order to reduce development costs?

Clarifying your business strategy allows for the final question – how can data and analytics help to achieve these goals? Your roadmap lays out these data and analytics objectives and plots a course to reach them.

2. Ownership

Ownership of the data and analytics strategy and the subsequent plan needs to be understood. It needs to be owned by specific individuals, who are on the hook to deliver it. And they must have the support and sponsorship to make it happen, to ensure that key milestones are successfully hit. A data and analytics roadmap is a living plan, and it is essential that someone is responsible for monitoring progress, reviewing and reordering priorities as necessary, and communicating this to the wider team.

3. Select the right technology

Next, it’s vital to ensure that you have the right tools to meet the short, medium and long-term stages of your roadmap. Depending on your objectives these will likely relate to:

    • Data Management
    • Data Visualisation
    • Data Science

When it comes to technology there is often no need to invest heavily at the very start of the journey. It’s perfectly valid to start with small investments and prove the business value before opting to expand.

4. Culture and Skills

Platforms or technologies, especially ones that require new competencies, cannot be effectively deployed overnight. Your team’s data culture, skill sets, and levels of data literacy should all be considered in the earliest stages. A data and analytics roadmap should not only consider which tools are right for the size and shape of your organisation, but should also logically sequence any training or hiring.

5. Set up Processes to ensure Standards and Governance

A data and analytics strategy may introduce a number of new working methods and technologies into your organisation. It’s best to implement processes for standards and governance early on in your roadmap; this promotes best practices and data security across teams.

While building a roadmap can seem complicated at first glance, taking the time to consider each of these five essential stages will help to build a robust plan. Once the project begins, a well-planned roadmap will help keep things on track, provide clear metrics and milestones, and ultimately save you time and resources.

This article is in partnership with TrueCue as part of our Data-Driven SMB series. For more information, advice and resources on how to accelerate your organisation’s data and analytics maturity, click here.


Richard Bradley

Darren Howells

Richard Bradley joined TiG in 2021 as Chief Financial Officer, bringing with him a wealth of experience that will support TiG on its acquisition journey.

Richard has over 20 years’ experience in Corporate Finance Advisory and as CFO of Private Equity backed B2B businesses. His previous roles were as CFO and COO of EPI Group, a B2B Professional Services business which he led through a PE-backed MBO, and of Arrow Business Communications, a Private Equity backed B2B Telecoms, IT and energy services reseller.

As a commercially minded and strategic CFO, Richard is experienced in executing and integrating complementary bolt-on acquisitions. He enjoys working in collaborative, entrepreneurial businesses, making him a perfect fit for TiG Data Intelligence.

“I am delighted to be joining such an impressive team and prominent leader in the Cloud Services market, and look forward to adding value by supporting the strong organic growth trajectory complemented by strategic acquisitions.”




Serverless Computing Q&A with Mitesh Desai

Darren Howells

In this Q&A session between TiG’s Technical Director, Mitesh Desai, and COO George Georgiou, we’ve covered some frequently asked questions about Serverless Computing. If you’d like to find out more about how moving to a Container model could reduce your business’ spend on cloud computing, take a look at our Cloud Pleaser page or Contact us.

What is Serverless Computing?

Serverless computing is an architecture where code execution is fully managed by a cloud provider, instead of the traditional method of developing applications and deploying them on servers.

There are multiple benefits to Serverless computing. It takes away the burden of infrastructure management and admin, providing a scalable platform so users can deploy code quickly and efficiently. The increased efficiency means organisations can save money and reallocate resources to accelerate the pace of innovation.
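
To make that concrete, here is a minimal sketch of a serverless HTTP function using the Azure Functions Python programming model (the decorator-based v2 model). The route name and greeting logic are purely illustrative and aren’t taken from any TiG project; the point is that there is no server, virtual machine or web framework to provision.

    import azure.functions as func

    # The FunctionApp object is all the "infrastructure" the developer declares;
    # Azure provisions, scales and bills the compute that runs it on demand.
    app = func.FunctionApp()

    @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
    def hello(req: func.HttpRequest) -> func.HttpResponse:
        # Hypothetical example: echo a greeting back to the caller.
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!")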

How does Serverless computing save money for businesses operating in the cloud? Could you give an example?

From a commercial perspective, it’s far cheaper to run containers and serverless objects in any cloud platform, especially Azure, than it is to run virtual machines. As an example, a typical virtual machine costing anywhere up to £300 could be replaced with a container that starts at £60-£70. The big cost difference is based on the resources you’re using in the cloud environment just to run that particular function or code, whereas with a virtual machine you always have resources that need to be allocated and turned on for the virtual machine itself to be present.

On average, we see around a 40 percent decrease in costs by moving from traditional virtual machines of different sizes. These could be anything from machines performing basic web application roles to complex database roles. If you take a mixture of these machines and move them into containers, plus some platform-as-a-service such as SQL containers, on average we see at least a 40 percent reduction in costs.
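
Purely as an illustration of the arithmetic, using the ballpark figures quoted above (which will vary by workload and region):

    # Illustrative figures only, taken from the discussion above.
    vm_cost = 300        # typical virtual machine, up to ~£300
    container_cost = 65  # comparable container, starting around £60-£70

    saving = vm_cost - container_cost
    print(f"Single-workload saving: £{saving} ({saving / vm_cost:.0%})")

    # Across a mixed estate of web and database workloads, the average
    # reduction quoted above is closer to 40 percent than this upper-end figure.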

OK, so that's considerable. Is it possible to save anything around licensing as well?

Depending on how far you want to take your Serverless journey, it can give you a lot more cost savings. You could go to functions, which are pure containers that just run the framework for the code. So for example, if you’re looking at a Java website or application, those will completely remove any operating system licensing costs because there are no Windows or Linux licences involved. If you go to Kubernetes and Docker containers, you could look at either a free Linux version, which again will reduce your cost, or a Red Hat version, and potentially a Windows Docker image as well. All of those are reduced costs compared to maintaining and running a full-blown operating system on a virtual machine.

Are there any situations where it wouldn't make sense for me to move?

Sometimes it depends on what type of workload you’re looking to move. At TiG we do a lot of pre-sales and workshops prior to looking at a Serverless environment. In some cases there are going to be legacy applications or applications that have been developed that do rely on a particular function, that a virtual machine or a full-blown operating system provides. This would mean that you can’t lift and shift the application into a Serverless environment. However, that’s where we can look at refactoring the application and modernising it to try to make it work in a Serverless environment.

If my business has automated configuration, will I have to redo all that work? What's the cost of moving to Serverless computing in approximate days if I have an environment of, say, 100 servers?

So typically we use Azure DevOps. Most cloud platforms have a DevOps programme behind the transformation to this Serverless world, which helps accelerate the journey and then helps you to scale going forward as well. We typically look at anywhere up to 45 days to do a transformation of that type of workload into containers, whether they’re Docker images, Kubernetes, or full Serverless in terms of functions and event groups.

It’s all driven through Azure DevOps or another DevOps programme, so going forward, scaling that environment up or shrinking it down is fully automated and controlled through pipelines.

From a support perspective, how can I take advantage of new ways of supporting my environment?

Going forward, you’ll be using a lightweight OS and Docker-based containers and frameworks. That will help reduce your patching cycles: there will be less frequent patching of operating systems and fewer vulnerabilities to manage. General maintenance and management of the operating system will also be easier. So from a technical perspective, you get those advantages.

From a commercial perspective, again, we’re seeing where we’ve done these transformations, there’s also a decrease in the managed service costs because there’s less patching for us to do as an MSP, there’s less maintenance work to do on servers. So that means further savings.

Are you seeing many organisations considering moving to Serverless computing now? It reminds me of the days when we started talking about going to virtual machines in Azure and everyone was quite skeptical. What's the feeling you're getting from the clients that you engage with?

I think it’s almost the opposite of that scenario now, because a lot of clients are realising that if they modernise the application, they’re not just going to benefit from re-platforming, i.e. moving it from virtual machines to Serverless; they’re also taking advantage of scalability and actually addressing issues and problems that they currently have with their applications, whether it’s scaling for performance or for more clients. So we’ve seen a lot of clients say, well, we don’t want to move to this environment just because it’s cheap, or because it’s something new out there.

A major reason to move is to enhance that resilience capability, whether it’s to create lots of small environments for different clients to use their products, or to shrink and rapidly increase that environment based on demand. So we’re getting a lot of traction from that, rather than a ‘let’s move to a different way of hosting our application’, because what we’re really saying is let’s modernise the application so it can fit with the ongoing business requirements.

Who is the market leader from a public cloud perspective? Because obviously we still have the big players in that space, so Microsoft, Google, IBM, VMware through IBM. And I've noticed that we've got organizations like Rackspace offering their own flavor of Serverless compute.

So there are some strategic market players like Rackspace who are starting to offer their own containerised, Dockerised options. A lot of those focus on services that they already provide. Rackspace provides a lot of hosting already, and they’re looking at Dockerising and containerising their own hosting platform for websites and for services.

You’ve got AWS, which has been in this space since, you could probably say, day one, because a lot of their compute and resources are based on containerisation and spot-type resourcing. AWS doesn’t have a native operating system and relies heavily on Linux and open source, so they’ve been in this space for a number of years.

But recently we’ve seen Microsoft make a big play into this market. A couple of the features and functionalities that they’ve added, especially around API management, have made a lot of positive progress. These allow businesses to take advantage of modernising applications by offering the application’s functionality through APIs, and this has made Microsoft more of a market leader than the others. They’ve taken that step forward to drop the cost of utilising Serverless environments, but also provided a gateway for any type of business, whether it’s a small law firm or a big corporate insurance firm, to take advantage of them. The AWS and Google markets still play very much strategically with the bigger players, because there’s a lot that needs to be configured and put in place, whereas the Microsoft market is very much open to small companies as well as large ones.

TiG offer a free workshop in order to assess your current application stack and see what benefits we can offer in order for you to start this journey. Find out more on our Cloud Pleaser page.



SD-WAN Q&A with Jacques Fourie

Darren Howells

In this Q&A session between TiG’s Director of Managed Services, Jacques Fourie, and COO George Georgiou, we’ve covered some frequently asked questions about SD-WAN. Watch the video or read the questions and answers below to learn more about this technology and the way it could work for your business. Then find out more about TiG’s packaged security bundle – Guardian. 

Today we're focusing on SD-WAN, in particular around networking. We've heard the term SD-WAN used a lot recently in a lot of articles and in customer interaction. My first question to you is, what is SD-WAN and how does it differ from the type of networking we're used to, such as MPLS or direct lines into data centers for connectivity?

I think we need to look at it in two ways. First of all, it’s the way we deploy networks. We’re looking at links into public cloud data centers like Azure, and when we’re coming into those types of platforms, we’re no longer coming into hardware-based appliances, traditional firewalls and routing infrastructures. If we go back down to the sites and the office layer, yes, we have these appliances, but the deployment is happening in the software layer, so it allows for speed of deployment as well.

Next, if we look at the landscape, people are consuming a lot more resources from SaaS based platforms like Office 365, and applications in the cloud. So a centralised network using traditional MPLS, VPLS, lease lines, becomes less effective because your resources are more web-based. So what we tend to do there is leverage standard internet connections using SD-WAN type of deployments to come back into the data center for the essential workloads.

I've heard of solutions such as VPLS. What's the difference between VPLS and SD-WAN? Because I understand the VPLS solution is also using Internet lines.

VPLS would traditionally use a centralised network into data centers, consuming resources in central data centers. You might have a centralised edge there with the Internet hanging off the centre of the data center, which is all good. But there’s a lot of infrastructure behind that, and it can become expensive when it’s global. If you have to split those entry points across different vendors, they can become a nightmare to manage as well. If you try to centralise the vendor, not a lot of vendors provide global VPLS, and those that do can be expensive in some countries.

I think decentralising the network and making use of Internet lines to provide WAN services over them – obviously secured with SD-WAN and VPN-type encryption – brings a lot more flexibility to your WAN connectivity and allows you to maximise the return on your local circuit investment. So if you’ve got Internet on the site, you can use it for your WAN deployments, using things like Microsoft Azure Virtual WAN or SD-WAN deployments into other fabrics, but also still leverage that fast onsite Internet for various SaaS services that aren’t WAN related.

You've used the words Internet lines quite a bit there. With the recent increase in homeworking we've been hearing about breaches constantly. We heard about Solar Winds a few weeks ago and currently we've got Sonos who've been breached. Does using SD-WAN solutions make us less secure?

No, it doesn’t, because ultimately we use high-end encryption, like you would over VPNs to secure the WAN over Internet lines. Decentralising the network brings challenges, yes, it’s obviously easier to keep people secure if they are in one building on a LAN, it’s a lot more controlled, but that’s just not the way we work anymore. A lot of the software that we use is designed for mobility, is designed to work from home, and you want to give your staff the ability to work anywhere but work securely.

So when we talk about SD-WAN in a branch office, we’re still able to secure it the same way we would secure VPLS and those types of connections; if anything, we might have more control over those because we control the endpoints. From a security perspective, this brings us back to the core principles around the Microsoft stack, which is identity and endpoint: securing the device, securing the person’s identity and making sure that it doesn’t matter where they are. Whether they’re sitting behind a corporate firewall or at home behind a home router, we are still able to apply the same level of security to their device and secure the network.

Let's look at it from a cost perspective. I think it'd be interesting to understand the difference between the cost of an MPLS style solution with the cost of the lines vs. SD-WAN with Internet lines.

If you take a traditional architecture, say there’s an office in the US, an office in Asia and an office in EMEA, plus some data centers and workloads, you would have to connect them all up on a VPLS to provide that low-latency, centralised network for people to share and access resources. That could be quite costly, because first of all you need to split your providers to match the requirements of each country, which adds management overhead. If you’re lucky enough to have a global VPLS provider, your management overheads are a bit lower, but the service costs are still astronomically high and you’re still going to run into issues with localised deployments and management between the various countries.

What SD-WAN brings is that it allows us to utilise simple Internet lines from good providers in those local areas. We then leverage the Microsoft Azure fabric, which is a 40 gigabit per second backbone, with rapid deployment using SDN-type appliances like Barracuda to get fast deployment into the WAN fabric, and then you leverage Microsoft’s global network to provide that global connectivity to workloads that might exist in different countries for various compliance reasons. So the speed of deployment and the flexibility there are key to those types of deployments. SD-WAN allows us to run those kinds of virtual WAN services over Internet lines and not compromise on security.

So I guess for clients with an international requirement, taking the SD-WAN route would be a lot cheaper, because MPLS circuits are expensive on a global basis?

Yes, 100 percent. If you look at your circuit requirements, and you’ve got an SD-WAN fabric to back you up, your actual local circuit requirement becomes just Internet. And Internet locally in any country is a lot easier to get than VPLS into a global network fabric.

So we’re looking at a decent Internet provider in each region, using SD-WAN over the top of that to provide secure WAN services into workloads, and then leveraging backbone fabrics like the Azure Virtual WAN to speed up those deployments. That means low-latency access from anywhere in the world onto a really cost-effective WAN solution, just by having good Internet on the site and the right type of appliances on the site. So your deployment costs come down, your speed of deployment goes up, and the management overhead is a lot lower as well.

So moving on to management overhead. What are the key differences between managing a traditional MPLS or a bespoke type environment versus SD-WAN from a cost of management and monitoring perspective?

I think we need to look at the complexity of VPLS deployments. Unfortunately, with a lot of network deployments there’s a lot of over-engineering out there. But keeping it simple: you’ve got routers to manage the WAN portion of your network, so there are circuits and provider-level things to manage there, as well as any Internet portion of your network, and they’re not unified.

The second thing is that they go down. You’ve got to have failover lines or VPNs, and it can become quite messy to keep the network up. For a low deployment overhead, SD-WAN really ticks the boxes, because we use appliances from partners like Barracuda, Fortinet and Cisco to stand up sites very quickly to the same security standards. That means a branch network can be stood up very fast, brought onto the SD-WAN and then managed centrally. So from a network engineer’s point of view, it’s a low-touch deployment after that, aside from your standard maintenance.



Best Managed Service Provider - HFM Awards

Darren Howells

Hedge Fund Managers European Services Awards 2021

TiG Data Intelligence have been announced as finalists at this year’s HFM European Services Awards, hosted by HFM Global. We have been shortlisted in the category for ‘Best Managed Service Provider’, for the work we do to support our Hedge Fund clients.

The rigorous judging process, based on the views of a panel of leading hedge fund COOs, CFOs, CCOs, GCs and CTOs, ensures that the HFM European Services Awards recognise those providers driving up service standards across the sector. This year’s awards have been particularly focused on client testimonials, and only those MSPs who were able to provide glowing feedback gained a place on the shortlist.

Entries were judged on: Commercial success, Innovation, Service delivery and client retention, Business diversity and inclusion, and Positive client feedback via submitted testimonials. These criteria were judged with particular reference to the challenge presented by the Covid-19 crisis.

Having made the shortlist, TiG now progress to the second stage of judging, which includes a 10-minute video conference interview on our entry. We are delighted to have been shortlisted as one of the best managed service providers for the Hedge fund sector and look forward to the second stage of judging and the awards ceremony to follow.

To find out more about the work TiG Data Intelligence does to support the Hedge Fund Sector, take a look at our Alternative Investment page or contact us.



Five career development tips from TiG's technical team

Darren Howells

Our aim at TiG is to make an impact on business by making cutting edge technology accessible for all. Our team are highly skilled, not only in their areas of expertise, but also in the way they are able to communicate their technical knowledge.

As part of the TiG staff training programme we encourage our team to learn from one another, give them confidence to try out new solutions, and challenge them to improve their own performance with each project they complete. A key part of this is celebrating achievements together as a team.

Alongside technical training and certifications that can be gained from TiG’s partnership with Microsoft, staff are mentored by other team members. When asked what sets TiG apart from other places they have worked, this was a key factor for everyone we asked. Technical Consultant Sateesh Patel summed it up by saying ‘At TiG you have the opportunity to get exposed to different technologies, they’re not afraid to teach you new things. They don’t restrict your learning in terms of what you can do and what you can’t – they will guide you through the process of getting something implemented or learning about a new technology.’

Given the support and training they need, technical staff prove their commitment through completing the necessary certifications and qualifications, as well as the relationships they build with their co-workers and clients.

We asked five of our technical team what advice they’d give to people wanting to move up the career ladder at TiG Data Intelligence.

Jacques Fourie

Jacques began his career at TiG as Service Manager, progressing to Head of Service Delivery before moving into his current role as CTO. His advice is to make sure you stay up to date with technology as it progresses: ‘There are new products and services coming out all the time, and a key part of our role is learning about them so we can decipher them for clients.’ As a manager he also has a checklist he looks for in potential new recruits: being committed to personal development through certification or practical skills, as well as being a team player and sharing new knowledge with others in the business, are top of Jacques’ list.


Sateesh Patel

Sateesh is one of TiG’s Technical Consultants, who made the move from the service desk team having progressed from the role of Senior Support Engineer. He says the main skill he has learnt is in dealing with customers, ‘The main goal is to communicate my ideas to the customer. So whatever technology we’re using, try and explain in a simple, non-technical way, because most of the people that we deal with don’t fully understand the technology and what it does.’

His advice for others looking to become a Technical Consultant is ‘Practice the technology you want to get into, for example Microsoft offers trial accounts on Azure, or set up your own server at home. Then you can learn on that because it’s better to make mistakes on a dummy environment than one that gets taken into production. It’s better to learn on the job, doing hands-on stuff rather than just reading loads of articles.’

Ajay Sachania

Ajay joined TiG straight out of university as a Junior Project Manager, progressing to Senior Support Engineer and eventually settling in his current role as Service Manager. He credits his progression to the support he received from COO George Georgiou: ‘There’s no way I could sum up all the guidance he has given me. With his focus on development I have matured in my approach and communication in so many ways…I started at the bottom of this business and I’ve made it to the top, it’s been an incredible journey. The changes over the years have been amazing and as the business has grown I’ve grown too.’ Ajay’s advice for those looking to progress is to focus their energy and motivation on continuing to learn.


Andrew Reeve

Andrew started out as a Third Line Engineer at TiG, before moving over to the professional services team as a Technical Consultant. Andrew remembers the project that brought about his career move: ‘We had a really big client that had a massive requirement to build out a website. So I was sent to the client site and we started off doing basic virtual machines in Azure. However, the client was so big that they needed automation, so I started to implement Terraform to build multiple virtual machines in Azure. We started off with a build time of about two days to build the environment, and by the end of the project, using Terraform and Ansible for automation, we brought that down to two hours.’


The ability to test out his technical knowledge on the job, with the guidance of more senior members of the team, meant that Andrew was able to display the skills required to earn a promotion to the professional services team.

Andrew also advised that working on ‘soft skills’ was just as important as technical knowledge, ‘You can’t just sit in a closet and work on servers. You are progressing from, for example, a data center role to dealing with clients – you have to go on calls with clients, understand their requirements, be on call and offer a customer care service during the project.’

Mo Rashid

Mo was working for netConsult when it was acquired by TiG in 2019. At that point he was working in the infrastructure team and was identified as having the potential for becoming a Technical Consultant. Mo’s advice is ‘Soak in the information, soak up as much as you can. You can learn lots from other people here, and you need to continue learning. Your colleagues will assist you – it’s a team environment, make the most of it.’

TiG is going through a period of growth, so if you think you have what it takes to make a valuable contribution to our team on your career journey, please get in touch.



Data Warehouse Q&A with Mitesh Desai

Darren Howells

In this Q&A session between TiG’s Technical Director, Mitesh Desai, and COO George Georgiou, we’ve covered some frequently asked questions about Data Warehousing. Watch the video or read the questions and answers below to learn more about this technology and the way it could work for your business. Then find out more about TiG’s quick-start data warehouse solution – Octopus.

We hear the term data warehouse a lot, and it means a lot of things to a lot of people. Could you give me a definition of what a data warehouse is? In addition, we hear the terms 'traditional data warehouse' and 'data warehouse in the cloud'. Can you tell me what the differences are?

From our perspective, data warehouses are the means to collect data from different data sources, whether structured or unstructured. From that we can create a single layer of data that becomes your master copy, and then the rest of the business can tap into that data.

So whether it’s to create some visualizations or just get some actionable insights from that data to make operational or strategic changes within the business, that’s where the modern data warehouse comes in. The infrastructure side of putting a data warehouse together with current cloud technologies is much faster: you can start using one of these cloud-based modern data warehouses within five days and have data coming in, being ingested and transformed, and actually producing results. Whereas with traditional data warehouses you would have to build the infrastructure – servers, your data management, all of the services that are now available out of the box and just tap into your current data.

A common concern I hear is that the cloud and especially the process of going to that cloud based data warehouse is very expensive. What would the cost comparisons be for on-prem versus the cloud version?

It’s actually the opposite. We’ve seen traditional data warehouses that take a long time to put together and maintain and manage – and also the costs based on licenses and especially enterprise software licenses can be huge.

I guess the biggest difference is that you can turn on and off technology that you don’t need in the cloud. You can turn off the data warehouse when it’s not actually providing you any valuable insights. And everything that you’re running in that data warehouse is based on a consumption model.

So if data doesn’t need to be adjusted or transformed, or the data sources are only coming in once a day, that’s all we’re actually running, so the costs are very small. One example is a data warehouse that’s taking over twenty-five million feeds, which is a vast amount, on a daily basis, yet it still costs less than the traditional data warehouse.

OK, so moving on from the definitions. What would be a typical use case, because I guess when we talk about data warehouses, it's normally within the affordability range of enterprises or very large mid-sized businesses. What is it that brings cloud-based data warehouses within the reach of the SME?

I think a number of things. Especially now, it’s the technology that’s available out of the cloud platform itself: without having to engage data scientists, data experts or SQL experts, there’s a lot that’s already available in the actual cloud platform. So small to medium sized businesses can take advantage of that with code-free setups and then use partners like us to accelerate that process when necessary. But the vast majority of putting a data warehouse together is invisible to the user, and a lot of the concepts are simplified in the cloud.

That means that you can, as a small business, start taking advantage of creating visualizations and getting some insights into your business via a quick method. One of the drawbacks of a traditional data warehouse is that if a new data feed comes in or a column changes in Excel, or the data structure changes, you’re looking at reengaging your data team, which is comprised of three or four people. In a cloud based solution it just learns that there’s a new problem or different type of data and starts matching it together. It then gives you suggestions on how to incorporate that data.

I want to hone in on some examples. Let's say I operate a financial services organization, I do a lot through spreadsheets. Traditionally, we've not really been in the position to build a large scale data warehouse environment. We have a lot of spreadsheets, and we take all our feeds from the core applications that we use. Extracts as CSV files in most cases. Is that a suitable source for a data warehouse?

Yes – sources can be unstructured or structured data, so they could come directly from applications, from Excel spreadsheets, from SharePoint, or from live streaming data. Whether it’s feeds from Twitter or from your own feedback forms, whatever the structure, the data can come from a vast number of places. The data warehouse collects that data, creates a uniform timestamp and then stores it within the database. You can then tap into that on whatever device you happen to be using, and it could even feed directly from the business’s social channels, CRM or Salesforce. So you have the ability to replace these time-consuming processes of creating spreadsheets and extracts from applications, then merging them together to create the Excel report. It can all be done within the data ingestion process of the data warehouse.
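
As a rough sketch of the ingestion step described above, the snippet below reads a CSV extract and an Excel sheet, stamps every row with a uniform load timestamp and lands the result in a staging table. The file names, table name and connection string are hypothetical, and pandas with SQLAlchemy simply stands in for whichever pipeline tooling the warehouse actually uses.

    from datetime import datetime, timezone

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical sources: a CSV extract from a core application and a
    # finance spreadsheet. Real feeds could equally be APIs or live streams.
    feeds = [
        pd.read_csv("core_app_extract.csv"),
        pd.read_excel("finance_report.xlsx"),
    ]

    loaded_at = datetime.now(timezone.utc)
    for feed in feeds:
        feed["loaded_at"] = loaded_at  # uniform timestamp across the load

    combined = pd.concat(feeds, ignore_index=True)

    # Land the combined feed in a staging table (placeholder connection string).
    engine = create_engine("mssql+pyodbc://user:password@warehouse-dsn")
    combined.to_sql("staging_daily_feed", engine, if_exists="append", index=False)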

OK, so I have the spreadsheets, and historically, if I want to do an analysis I have to open my current spreadsheet alongside all the historic sheets. Will I still have to do that? And will any of my historic data get overwritten?

In the data warehouse, multiple versions and copies are kept. The structure of the data warehouse is very similar to working on a Word document in SharePoint, where you have multiple versions. You can also create historical points in the data so you can go back in time. Most Azure SQL databases start with a terabyte of data, and the cost of data stored in cloud platforms is relatively cheap, so holding two, three or four hundred terabytes of data is still possible within the data warehouse without it costing what it would if you installed that on a traditional data warehouse.

One of the things I have to do manually at the moment is, when I add new data, sometimes I have to change the format. For example if I want dates in UK date format rather than US, or if I'm bringing in phone number data and I want to remove the dial assist numbers to keep everything uniform. Do I have to do that in my spreadsheets before I bring it into the data warehouse?

One of the benefits of the modern data warehouse is the data preparation option, where we can use data analytics and machine learning. We tend to make it more about prepping and training the data rather than the machine. What we mean by that is that the data will come in and the machine will start learning what that data structure should look like. So, for example, if the date format should be UK, it will start rearranging that data, flagging missing data and changing dial assist codes.
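
A minimal pandas sketch of the kind of preparation rules described here. The column names, the US-to-UK date conversion and the dial-prefix rule are assumptions for illustration only, not part of any specific TiG tooling.

    import pandas as pd

    df = pd.DataFrame({
        "signup_date": ["03/25/2021", "12/01/2021", "not a date"],
        "phone": ["+44 (0)20 7946 0000", "020 7946 0001", None],
    })

    # Incoming feed uses US month-first dates; parse them explicitly so they can
    # be rendered day-first (UK style) in reports. Bad values become NaT, which
    # makes missing or malformed data easy to flag rather than silently dropped.
    df["signup_date"] = pd.to_datetime(df["signup_date"], format="%m/%d/%Y",
                                       errors="coerce")

    # Strip the +44 (0) dial-assist prefix and punctuation so numbers are uniform.
    df["phone"] = (df["phone"]
                   .str.replace(r"\+44\s*\(0\)", "0", regex=True)
                   .str.replace(r"\D", "", regex=True))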

You said you can actually prep and train, so does that mean we are using AI to clean the data? And what else is possible with that?

In the Azure world there’s a concept called Azure Databricks, a platform built around open-source Apache Spark that helps you code machine learning. It gives you several options: you can use Scala, Python and other appropriate languages to code in a machine learning format. The area that we specialize in is Scala coding, which allows you to program and train the data – to say, if this should have a UK date format, then this is an example of the UK date.

It will start learning that, then go through your data sets and replace any other date format where it sees one. This can be expanded to many other uses around machine learning. So it isn’t just changing dates and formats; you can actually go and grab information that will help you validate the data. Going back to your example of a financial organization, if you want to validate any data that you’re collecting, or receiving as part of an application form or a web-based form, you can get the database to search out data from Companies House or another reputable source, compare it with what’s being put in, and then flag up any exceptions where the data is either incorrect or misleading. So data preparation can go from just updating formats, to getting the data into the right areas, to actually validating the data as well.
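
The Q&A mentions Scala; purely for consistency with the other snippets in this piece, here is the same ‘prep and validate’ idea sketched in PySpark, as it might look in an Azure Databricks notebook. The path, column names and the eight-digit company-number rule are invented for illustration; a real build might instead look the value up against Companies House.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    forms = spark.read.option("header", True).csv("/mnt/raw/application_forms.csv")

    prepared = (
        forms
        # Normalise a US-style date column to a proper date type.
        .withColumn("submitted_on", F.to_date("submitted_on", "MM/dd/yyyy"))
        # Flag company numbers that don't match a simple 8-digit pattern,
        # standing in for a lookup against an external register.
        .withColumn("company_number_suspect",
                    ~F.col("company_number").rlike(r"^\d{8}$"))
    )

    prepared.filter("company_number_suspect").show()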

I've now got all my data in the data warehouse from external sources, I also run a CRM dynamics and I've got a finance system, a business system, a practice management system that have APIs. Can I now build dashboards in a visualization tool to bring everything all together or I just limited to the data in the data warehouse?

The data that sits in the data warehouse can be exposed using API management, and there are two key benefits to doing that. You can give other businesses direct, secure access to your data through an API. So the days of FTPing files, or providing them over secure email, or getting access to the data by sharing, have changed dramatically. With this model, you can securely share an API and allow businesses to connect directly to the data that you’re serving.

You can also connect directly to the same data using visualization tools such as Power BI and Tableau, and create dashboards that you can then surface around the business and publish back onto websites as well. The key piece of the data warehouse is getting that single layer of data, as we mentioned earlier. Once you get to this side of the journey, you can start using multiple tools to distribute that data.
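
As a very rough sketch of what sits behind that sharing model (route, table and connection string all hypothetical), a small Python service over the curated layer might look like the snippet below, with a gateway such as Azure API Management placed in front of it to handle keys, throttling and the partner-facing URL.

    from fastapi import FastAPI
    import pandas as pd
    from sqlalchemy import create_engine

    app = FastAPI()
    engine = create_engine("mssql+pyodbc://user:password@warehouse-dsn")  # placeholder

    @app.get("/api/sales-summary")
    def sales_summary():
        # Read a hypothetical curated table and return it as JSON for partners
        # or for visualisation tools that prefer an API over a direct connection.
        df = pd.read_sql("SELECT region, total_sales FROM curated.sales_summary", engine)
        return df.to_dict(orient="records")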

One final open question - What would be your top tip for anyone considering moving to a data warehouse?

I think it’s key to concentrate on where your data sources are coming from and to pick sources that are going to give you valuable insights. Although we said that the consumption and running of a data warehouse can be scaled to large numbers, you’ve also got to work out what data is valuable and bring that data in, so you can create a nice uniform layer of data that the whole business can use.

As an example of that, we’ve had a couple of recent engagements with customers who’ve said they’ve got terabytes of data that they’ve been collecting, and they feel it’s right for them now to look at a data warehouse solution in the cloud so they can add some intelligence to the data preparation. But when we actually look at that data, it isn’t the right data to add any value to the business. Data they could have collected, such as conversations or training material, would have added a lot more value for what they do, rather than data that is only going to be used once. So a good tip would be, before you embark on this journey and start ingesting data from multiple sources, have a look at what value the data is going to provide in terms of what you need to deliver for the business.



TiG welcomes new team members

Darren Howells

Team TiG expands despite challenging year

It has been a busy year for TiG Data Intelligence. The acquisition of MMR IT early in the year meant we welcomed 39 new team members, and it hasn’t stopped there. Despite the circumstances that surrounded us during 2020 we’re proud that we were able to extend our team further by welcoming 21 new recruits.

New team members have spanned every department of the business, from Accounts to Infrastructure and everything in between. For some, joining in 2020 has meant that they have completed their entire induction virtually – meeting their colleagues and clients through Microsoft Teams.

We caught up with some of our new team members to ask how they’ve found the experience so far, here’s what they had to say.


After a year of working together virtually, we’re looking forward to meeting our new team members in 3D as soon as circumstances allow!

The support our team receives goes well beyond their first months on the job – all TiG employees are supported to continue training and pursue their career goals. Here’s a message sent to CEO Des Lekerman recently – ‘I am so glad and very proud that I work for TiG… it’s the best company with best top dogs in all the companies I have worked for. A company that employs people that can be approached no matter their position and they listen! Thanks for having me as a staff member man! I feel like part of a team rather than a number man’ – Kiran Varsani, Senior Support Engineer.

If you would like to find out further information about joining Team TiG take a look at our careers page.

If you would like to learn more about the technology we use to work and collaborate remotely take a look at the following articles:

Four steps to follow when your teams start working from home

The virtual space for modern working and team collaboration

Microsoft Teams



Comms Business Awards Finalists

Darren Howells

TiG Data Intelligence have been shortlisted as finalists at this year’s Comms Business Awards.

These prestigious industry awards are in their 16th year and aim to reward forward-thinking and progressive businesses in the telecoms industry. The mission of these awards is to shine a light on genuine talent and innovation and to provide a springboard from which companies can flourish and grow.

TiG are pleased to announce we have been shortlisted in the category for ‘Best mid-market IT solution’ for our M20:20 solution. M20:20 simplifies the process of migrating to Microsoft 365, meaning businesses can make the move in just 20 days for £20k.

This award category is dedicated to resellers, VARs, dealers and other Channel Partners that sell solutions directly to businesses. The category is for those selling a particular solution suitable for mid-market businesses with between 250 and 500 users.

We are delighted to be Comms Business Awards finalists, particularly after an extremely busy and challenging year. We look forward to the virtual awards ceremony which will take place on Tuesday 19th January 2021.

To find out more about the M20:20 solution for which we have been shortlisted, take a look at the M365 gateway solution page or sign up for our free on-demand webinars.

