The criticality of applications

Alex D’Anna

More than 55 million Africans rely on small apps on their mobile phones to transfer money. In fact, mobile money accounts outnumber bank accounts in Kenya, Tanzania, Uganda, and Madagascar. Astonishingly, according to a new study by telecoms trade group GSMA, in June 2012 alone the value of Kenya’s mobile money transactions equaled 60 percent of the country’s gross domestic product. While some of this is due to geography and parts of the population not having easy access to banks, mistrust of local currencies and governments is also behind this wave of mobile banking: the ability to bank without any involvement from local institutions is seen as a real bonus.

As you can imagine, this places a massive responsibility on telecommunications providers as well as on the companies providing the apps: if transactions slow down or stop, then people in Mogadishu, for example, can’t buy dinner. Keeping telecommunications services and the associated apps performing 24/7 is, in some cases, critical for survival.

To prevent interruptions to service, most companies that provide apps for online banking work to Service Level Agreements (SLAs) that require them to keep their IT infrastructure up and running at all times.

One of Virtual Instruments’ larger customers, like many other enterprises today, uses VirtualWisdom to help deliver this continuous service, and can measure the cost of latency in millions of dollars for its online payment system. It has found that if the application slows down, transactions are lost and users move to a more reliable alternative almost immediately.

So how can you ensure application performance? Here’s a checklist of recommendations that can help:

  1. End dependency on device views. Most organisations try to manage their infrastructures using the tools provided by their virtual server, fabric and array vendors. The problem with such tools is that they do not provide information in real time: polling the devices frequently enough to do so would itself introduce latency. As a result, the reports they generate are averages over time, built from information gathered at one- to five-minute intervals, so short-lived problems are smoothed away (see the first sketch after this list). The other issue with device dependency is that the vendors operating the virtual system can take weeks to report on service levels, as collecting logs from each device can take days.
  2. Stop treating all applications equally. A database that runs the business should be given faster, more capable resources than less critical applications. But because you can’t see each application’s utilisation and performance in isolation, everything gets upgraded and overprovisioned, wasting budget and capacity.
  3. Eradicate storage area network noise. Every storage area network (SAN) suffers from minor issues at any one time. Although small, these glitches can have a significant impact on the network. An outage or serious latency spike is rarely caused by a single failure; most of the time it is down to a collection of issues accumulating into one large problem. By looking for and eradicating these small symptoms of trouble, such as loss of signal and loss of sync, the storage system as a whole will run much more smoothly (see the second sketch after this list).
  4. The storage and application teams must work together. Different people need different views of what is going on in the infrastructure, but at the moment, instead of having different views of the same data, each team has its own view of its own data. This is a historical issue that creates disputes over what different people “see” in the storage network, and it prevents the team as a whole from seeing the bigger picture. The database administrator, for example, might experience performance problems, but the tools used to assess database performance are quite different from, and often unfamiliar to, the team that manages the storage. A common language is key here. Likewise, the storage team doesn’t know what the end-user experience is, because that is the application team’s job, and vice versa. If everyone works from one end-to-end, real-time view of the infrastructure, every team, and the business itself, will see immediate and extensive benefits.
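
To illustrate the averaging problem in point 1, here is a minimal sketch. The numbers and window size are invented for illustration, not taken from VirtualWisdom or any vendor tool; the point is simply that an interval average can report a “healthy” figure while individual transactions are timing out:

```python
# Hypothetical illustration: per-second I/O latencies (ms) over a 5-minute window.
# 295 seconds are healthy (~2 ms), 5 seconds spike to 500 ms during a fabric glitch.
samples_ms = [2.0] * 295 + [500.0] * 5

average_ms = sum(samples_ms) / len(samples_ms)
worst_ms = max(samples_ms)

print(f"5-minute average latency: {average_ms:.1f} ms")   # ~10.3 ms - looks acceptable
print(f"Worst second in the window: {worst_ms:.1f} ms")   # 500 ms - transactions time out

# A device-level report built on the interval average would show a healthy system,
# while real-time, per-transaction measurement would expose the spike.
```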
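
And as an illustration of point 3, the sketch below flags individually minor error counters whose growth between polls suggests an accumulating problem. The port names, counter names and threshold are entirely hypothetical, a sketch of the idea rather than a feature of any particular product:

```python
# Hypothetical sketch: small fabric errors (loss of sync, CRC errors) rarely trigger
# alarms on their own, but their accumulation between polls predicts trouble.
poll_1 = {"port_3": {"loss_of_sync": 2, "crc_errors": 0},
          "port_7": {"loss_of_sync": 1, "crc_errors": 4}}
poll_2 = {"port_3": {"loss_of_sync": 9, "crc_errors": 1},
          "port_7": {"loss_of_sync": 1, "crc_errors": 5}}

GROWTH_THRESHOLD = 5  # flag any counter that grows by more than this between polls

def flag_growing_errors(before, after, threshold=GROWTH_THRESHOLD):
    """Return (port, counter, delta) for counters growing faster than the threshold."""
    findings = []
    for port, counters in after.items():
        for name, value in counters.items():
            delta = value - before.get(port, {}).get(name, 0)
            if delta > threshold:
                findings.append((port, name, delta))
    return findings

for port, counter, delta in flag_growing_errors(poll_1, poll_2):
    print(f"{port}: {counter} grew by {delta} since last poll - investigate now")
# port_3: loss_of_sync grew by 7 since last poll - investigate now
```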

Finally, introducing an Infrastructure Performance Management policy that prioritises resources for critical applications makes operations faster and more efficient. An Infrastructure Performance Management strategy also cuts capital expenditure by reducing overprovisioning, and cuts operating expenditure by catching performance problems early, reducing latency and downtime.