Analyst Report | IDC: A Blueprint for Modern Monitoring
What are NAS and SAN?
NAS (network-attached storage) and SAN (storage area network) are the two primary forms of on-premises networked storage. The names sound quite similar, which often leads to confusion, but they are separate technologies.
NAS refers to a hardware device that is connected to an enterprise’s local network. A NAS device typically works over Ethernet or a similar wired connection and is designed to be simple to set up and cost effective. If a business runs low on storage space, it can obtain a NAS device and plug it into the network so it becomes available to everyone. NAS devices can contain multiple drive bays with failover technology such as mirroring or RAID, although some NAS devices are simple enough that they can be repurposed for home use.
Rather than representing a single device, a SAN is a network of storage devices. These devices connect to a dedicated network, separate from the Ethernet LAN, generally using storage-centric Fibre Channel technology to transfer data to client computers. Because SAN is more expensive and complex than NAS, it is often reserved for applications where low latency and zero downtime are critical. Video editing and surveillance video recording are common applications for SAN, demanding high throughput and low latency for the massive amounts of data being transferred. Because this data travels on its own private network, SAN avoids congestion on the LAN and maintains a sustained, high rate of data transfer.
In summary, SAN is much faster, highly scalable and designed for high-end operations, but it is more costly and requires significant expertise to manage along with a private Fibre Channel network. NAS, on the other hand, is an inexpensive, simple technology that can be set up quickly and operates over your existing LAN.
What is cloud storage monitoring?
Cloud storage monitoring, such as AWS storage monitoring, is the process of observing, reviewing and managing storage systems within cloud infrastructure. It is generally implemented through automated monitoring software that provides access to, and control over, the entire cloud infrastructure from one centralized location. Automated performance monitoring techniques can gauge availability and analyze performance through metrics that can include the number and type of users, database performance, server response times, resource levels, system and process performance, as well as security issues and other disruptions. Administrators can review the operational status and health of cloud servers and components and look for system disruptions or suspicious activity. The ability to continuously evaluate these metrics can give organizations insight into storage system issues or vulnerabilities long before bigger and more damaging problems arise.
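The automated checks described above boil down to comparing current metric samples against acceptable limits. A minimal pure-Python sketch of that kind of health check follows; the metric names and threshold values are hypothetical examples, not settings from any particular monitoring product.

```python
# Sketch of an automated cloud-metric health check.
# Metric names and thresholds are hypothetical illustrations.

def evaluate_metrics(samples, thresholds):
    """Compare the latest metric samples against alert thresholds.

    samples    -- dict of metric name -> current value
    thresholds -- dict of metric name -> maximum acceptable value
    Returns a list of (metric, value, limit) tuples that breached.
    """
    breaches = []
    for metric, limit in thresholds.items():
        value = samples.get(metric)
        if value is not None and value > limit:
            breaches.append((metric, value, limit))
    return breaches

# Example: server response time and resource levels, as mentioned above.
current = {"server_response_ms": 420, "storage_used_pct": 61}
limits = {"server_response_ms": 250, "storage_used_pct": 80}
print(evaluate_metrics(current, limits))
```

A real monitoring product would collect these samples continuously from the cloud provider’s metrics API and feed breaches into its alerting pipeline.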
What should you look for when monitoring storage performance?
Storage performance comes down to answering a few key questions about how your storage devices are running, as well as network performance and other performance issues. These include:
- Are users waiting too long for data to be received or written on the storage device? (Is the SAN fast enough?)
- Is data being lost in transit due to congestion or error?
- Do storage devices have ample resources to operate without being constrained?
- If problems emerge, is it easy to discover the root cause and the proper solution?
- Is the system approaching a capacity limit?
What are key storage monitoring performance metrics to track? How can you show storage is performing well?
Storage performance uses metrics to answer the above questions. The key storage monitoring performance metrics include the following:
- Latency (read and write): Latency is a core metric that measures how responsive a storage device is to requests. It is generally measured in milliseconds, tracking the time it takes for data to be read from or written to the disk.
- Throughput (read and write): Throughput is a close neighbor of latency, measuring the number of bytes read from or written to the device per second. Throughput will change based on demand for data transfers, but the goal in monitoring throughput is to ensure that a device is not regularly “maxed out” at its highest throughput rate.
- IOPS (input/output operations per second): IOPS, closely related to throughput, is a widely referenced metric that gauges how many reads and writes a device is successfully completing each second. It is the analyst’s job to verify that the actual IOPS metric is reasonably close to the specified IOPS for each device and to ensure that it is not degrading over time, which could indicate a larger problem.
- Utilization: This metric refers to the SAN’s CPU utilization, measuring how much time is being spent processing various storage requests. Utilization that climbs above roughly 50% for more than a few seconds could indicate a problem with the SAN.
- Queue length (read and write): The queue length (also called queue depth) of a storage device is the number of input and output requests pending at any given moment. Because a disk can generally handle only one operation at a time, some level of queueing is normal, although this number should always be small (under three at any given time). Queue length is also correlated with latency — high queue length but low latency means that your storage devices are successfully keeping up with high levels of demand.
- Capacity available: This is a simple metric that measures storage resources — specifically how much empty space is available on a storage device. There’s no universal guideline for an acceptable level of free, available capacity, but generally, when available capacity drops below 20%, it’s likely time to upgrade your SAN.
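Throughput and IOPS are typically derived by sampling a device’s cumulative I/O counters twice and dividing the deltas by the interval. The sketch below shows that calculation in plain Python; the counter fields mirror the kind exposed by `/proc/diskstats` on Linux or a vendor management API, and the sample numbers are invented for illustration.

```python
# Sketch: deriving IOPS and throughput from two snapshots of a
# device's cumulative I/O counters. Sample values are made up.

def rates(prev, curr, interval_s):
    """prev/curr: dicts with cumulative 'reads', 'writes',
    'read_bytes', 'write_bytes'. Returns per-second rates."""
    return {
        "iops": ((curr["reads"] - prev["reads"])
                 + (curr["writes"] - prev["writes"])) / interval_s,
        "read_mb_s": (curr["read_bytes"] - prev["read_bytes"]) / interval_s / 1e6,
        "write_mb_s": (curr["write_bytes"] - prev["write_bytes"]) / interval_s / 1e6,
    }

t0 = {"reads": 1000, "writes": 400,
      "read_bytes": 50_000_000, "write_bytes": 20_000_000}
t1 = {"reads": 1600, "writes": 700,
      "read_bytes": 170_000_000, "write_bytes": 60_000_000}
print(rates(t0, t1, interval_s=10))
```

In practice a monitoring agent would take these snapshots on a fixed schedule and ship the computed rates to the central dashboard.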
All of these metrics can be sampled in real time, analyzed as an average over a certain time frame (the last hour, for example) and graphed in visualizations over longer periods of time. The IT department monitors these statistics and watches for any troubling trends that might be developing. High-quality storage monitoring software as part of your regular management software suite will make this easier by using intelligence to inform the analyst as to whether something requires attention.
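Averaging samples over a time frame, as described above, can be sketched with a simple sliding window. The window size and the 20 ms latency threshold below are illustrative choices, not recommendations.

```python
from collections import deque

# Sketch: averaging real-time latency samples over a sliding
# window and checking the average against a threshold.
# Window size and threshold are illustrative only.

class RollingMetric:
    def __init__(self, window=60):
        self.samples = deque(maxlen=window)  # old samples drop off

    def add(self, value):
        self.samples.append(value)

    def average(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

latency = RollingMetric(window=5)
for ms in [4, 5, 6, 30, 35]:  # a slowdown appears in the last two samples
    latency.add(ms)
print(f"rolling average: {latency.average():.1f} ms")
```

Storage monitoring software applies the same idea at scale, often with multiple window lengths so that both short spikes and slow drifts are visible.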
What are some best practices for storage monitoring?
Here are some of the most widely cited best practices for storage monitoring in your environment:
- Understand your organization’s data usage patterns: Does your organization frequently read and write small files, or does it infrequently read and write large files? Is the organization subject to periods of burst traffic (e.g., over the holidays)? The way your organization uses data can impact the storage architecture you choose and affect your analysis of monitoring metrics.
- Use storage management tools that offer a centralized dashboard: This is especially critical for visualizing usage if your data is stored in multiple locations. If you have numerous storage products from different vendors, carefully consider your management tool’s compatibility with all of them.
- Use monitoring statistics to reconsider how storage is being used: Frequently, IT departments that roll out storage monitoring find that some systems are heavily taxed and some are barely touched. Optimizing the locations in which datasets are stored spreads workloads out among various devices, providing better overall performance.
- Ask if compression can be used to improve performance: Compression can also improve your available storage capacity.
- Don’t just analyze past performance; predict future trends: Use storage metrics to look ahead at and plan for capacity expansion down the road.
- Create a future plan for capacity expansion: Your storage management tool should also use trend data to spotlight decaying performance and predict potential device failures.
- Monitor backup devices: Backups should be monitored in the same way as primary storage devices.
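Predicting future trends, as the practices above recommend, can start with something as simple as fitting a straight line to recent capacity readings and extrapolating to the device’s limit. The sketch below does exactly that; real tools use richer trend models, and the usage figures are invented.

```python
# Sketch: projecting days until a volume hits a capacity limit by
# fitting a least-squares line to daily used-capacity readings.
# The sample data is invented for illustration.

def days_until(readings_gb, limit_gb):
    """readings_gb: daily used-capacity samples, oldest first.
    Returns days from the last reading until limit_gb is reached,
    or None if usage is flat or shrinking."""
    n = len(readings_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, readings_gb))
             / sum((x - mean_x) ** 2 for x in xs))  # GB per day
    if slope <= 0:
        return None
    return (limit_gb - readings_gb[-1]) / slope

usage = [800, 810, 825, 830, 845]  # GB used over five days
print(f"~{days_until(usage, limit_gb=1000):.0f} days to capacity")
```

Feeding such a projection into an alert gives the team weeks of lead time to order hardware rather than hours to triage a full volume.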
What are the potential challenges of storage monitoring?
In complex environments, storage monitoring can become more challenging than simply ensuring hard drives are operating according to specifications and aren’t becoming full. Some of the possible challenges that a storage monitoring team might face include:
- Very large data stores: The IT team has to constantly manage multiple terabytes of unstructured data across a sprawling storage infrastructure, affecting bandwidth and availability. Despite these challenges, they also have to ensure reliability.
- Increased user mobility: Legions of employees working remotely during the pandemic created a new and particularly thorny challenge: How do you monitor and manage storage on users’ laptops and smartphones? Reliably available data has become an even bigger issue in light of the vast number of people who continue to access it remotely.
- The need for enhanced security: The rapid rise in the number and severity of malicious attacks has increased the need for comprehensive data security. Encryption makes data safer, but it also impacts performance, necessitating the development of new strategies to ensure responsiveness.
- The cloud: Data moving from on-premises storage devices to virtualized cloud environments reduces visibility into storage systems.
How do you control data storage growth?
The following are ways you can control and minimize the explosive growth of stored data in your environment.
- Create flexible data retention policies: Retention policies and schedules set in accordance with your needs and priorities allow you to determine how long to hold onto information based on the type of data being stored, business and regulatory needs and other criteria. This flexibility can also extend to specific types of files (e.g. MP3s, video files, etc.), which will help companies avoid storing employees' personal music or video files when data is backed up.
- Integrate with existing platforms: Many storage providers are also integrating cloud storage tiering directly into their products, which is known as native integration. With native integration, you don’t have to purchase or maintain additional software, so there’s less infrastructure to manage. You also won’t require additional servers to support the software — everything you need comes in the operating system from the storage provider itself.
- Implement a centrally managed storage solution supporting multiple locations: Backups can be challenging if your company is highly distributed with numerous offices around the country or globe. Having a centralized backup system in one place allows companies to manage backups from anywhere and continuously support their distributed environments, saving time and reducing IT resources.
- Automate storage and backup processes: Using automated storage and backup systems saves time otherwise spent managing data, while also decreasing the time it takes to restore data when necessary, translating into reduced IT staff needs, labor and operating costs. Cloud storage services such as Amazon S3 or Azure Blob Storage for your backups, snapshots or replicas are reliable, simplify maintenance tasks and are cost effective, allowing you to pay only for what you use.
- Select a storage system that scales with your business: When choosing a storage solution, you’ll likely pick one that best aligns with your needs and budget constraints. But looking ahead, you’ll also want to select one that can scale as you — and the volume of your data — continue to grow. Having this foresight will likely save you future hardware, software, replacement and implementation costs down the road.
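The per-file-type retention policies described in the first item above can be sketched as a simple sweep that flags files older than their type’s retention window. The extensions and retention periods below are hypothetical policy choices, not recommendations.

```python
import time
from pathlib import Path

# Sketch of a per-file-type retention sweep. Extensions and
# retention periods are hypothetical policy examples.

RETENTION_DAYS = {".mp3": 0, ".mp4": 30, ".log": 90}  # 0 = never retain
DEFAULT_DAYS = 365

def expired_files(root, now=None):
    """Return files under root older than their type's retention window."""
    now = now or time.time()
    expired = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        days = RETENTION_DAYS.get(path.suffix.lower(), DEFAULT_DAYS)
        age_days = (now - path.stat().st_mtime) / 86400
        if age_days > days:
            expired.append(path)
    return expired
```

A production tool would of course log, quarantine or archive matches rather than delete them outright, and would read the policy table from configuration instead of hard-coding it.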
How do you get started with storage monitoring?
Ready to get a better handle on your storage infrastructure? Here’s how to get started with a storage monitoring solution:
- Inventory storage assets: You can’t monitor what you don’t know you have, so start by making a detailed inventory of your storage assets, including NAS and SAN devices, server-based storage, offsite storage and cloud storage assets. Backup devices like tape systems are often overlooked in this process, but they also need to be included.
- Select and implement a storage monitoring system: Different applications offer different levels of support for the various environments, so ensure that any product you’re considering supports all of them. Ideally you want a monitoring tool that can keep tabs on all of these storage products through a single, unified dashboard.
- Understand your organization’s data usage patterns: As you begin using a storage monitoring system, consider your organization’s data usage habits. A video editing team may require the absolute best performance and low latency, while the accounting team may instead stress the importance of data security and reliability. Use your storage monitoring system to determine user experience and how successfully these needs are being met by following key metrics and establishing minimum performance thresholds where relevant.
- Set up alerts to inform analysts when problems arise: Alerting thresholds can be helpful in raising awareness about emerging problems, whether that’s an overloaded hard drive, a performance slowdown, downtime or an imminent device failure. Triggering systems can open a ticket, send an email or create some other type of notification when thresholds are crossed.
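The alerting step above can be sketched as a threshold check that hands breaches to a pluggable notifier (email sender, ticketing integration, chat webhook). The notifier interface here is invented for illustration.

```python
# Sketch: a threshold-crossing alerter with a pluggable notifier.
# The notify callable stands in for email, ticketing or webhook
# integrations; its interface is invented for this example.

def check_and_alert(metric, value, threshold, notify):
    """Call notify(message) when value exceeds threshold.
    Returns True if an alert was raised."""
    if value > threshold:
        notify(f"ALERT: {metric} at {value} exceeds threshold {threshold}")
        return True
    return False

# Usage: collect alerts in a list standing in for a real channel.
sent = []
check_and_alert("disk_used_pct", 93, 85, sent.append)
print(sent)
```

Keeping the notification channel behind a simple callable makes it easy to start with email and later route the same alerts into a ticketing system without touching the threshold logic.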
The Bottom Line: Staying on top of your storage systems is essential to the success of your environment
Storage simultaneously represents the most essential and least reliable technology component in the enterprise. It’s also an absolute constant: Although the degree of its importance changes from one organization to the next, no modern business and its associated end users can function without access to reliable storage. Without it, any level of computing simply becomes impossible. Storage monitoring is not nearly as complex an endeavor as other aspects of IT management, but it is nonetheless one of the most essential pieces of the puzzle. When storage is involved, it’s a case of not if but when an error, outage or downtime will occur. Failing to invest in a capable storage monitoring solution will put your business at risk.