Storwize catches up with rivals, adds cube of ML sugar to block storage too

IBM is again playing catch-up with rivals by adding a heavy sprinkling of dedupe dust to its nearly two-year-old Storwize arrays and other products. The software, along with the VersaStack converged infrastructure and the FlashSystem V9000, finally gets deduplication. This is based on a certain spec: an all-flash Storwize V7000F, an approximately 700TB usable configuration, and 7.68TB flash drives. As part of the big sell, IBM said it will provide the option to upgrade controllers after three years for the cost of ongoing hardware and software maintenance. And it reckoned storage admins can expect 100 per cent data availability protection for systems using IBM HyperSwap, deployed by IBM Lab Services.
At a briefing on a Silicon Valley IT press tour in June, Coraid founder Brantley Coile explained the why and how of this, and talked about how he wants Coraid to develop. Back in May 2015 we learned SouthSuite had bought the original Coraid ATA-over-Ethernet (AoE) intellectual property. The back story is long and involved and starts in 2000, when Coile founded Coraid to provide shared storage access to servers using the block storage AoE access protocol. It saw a fair bit of success – 1,000+ small/medium business customers and $12 million in sales – enough to attract VCs. But with its 159-strong headcount, Coraid spent $2 for every dollar of sales it earned. Coile is set on developing a stable and reliable business funded on profits from sales and support revenues, without taking on any debt.
Interview An NVMe over Fabrics controller-less array is not a SAN because it can't share data. That was the essence of Datrium CTO Hugo Patterson's view. Jeff Sosa, head of products at stealthy startup Pavilion Data Systems, has views on this topic and what to do about it. Jeff Sosa: I agree nearly 100 per cent with what he is saying. E8 (and Excelero when it is deployed with a disaggregated storage shelf) implement the JBOF (Just a Bunch Of Flash) architecture, where they scale by remotely accessing NVMe drives directly and running software on the hosts/clients to manage them, but you can't natively share block storage volumes across hosts as a result. It is kind of like the old days when vendors like Fusion-io and Virident (I worked at both) ran thick drivers for accessing direct-attached PCIe flash cards in servers.
Analysis What is a SAN, and is an NVMe over Fabrics (NVMeF) array a SAN or not? A Storage Area Network (SAN) is, according to Wikipedia, "a network which provides access to consolidated, block level data storage... A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems." Techopedia says: "A storage area network (SAN) is a secure high-speed data transfer network that provides access to consolidated block-level storage." Co-founder and CTO Hugo Patterson told a Silicon Valley press tour group: "NVMe in a shared chassis looks like an internal drive – so it's not shared data." He's saying that SAN users can share data and drives as well as storage chassis.
Dropbox is significantly expanding its network edge infrastructure, which is designed to drastically improve network syncing speeds. Dropbox network engineer Raghav Bhargava laid out a roadmap that takes advantage of in-house infrastructure and networking partnerships to improve service. In a previous announcement, Dropbox had explained why it was moving away from Amazon Web Services (AWS). According to the post: "Bringing storage in-house allows us to customize the entire stack end-to-end and improve performance for our particular use case. Second, as one of the world's leading providers of cloud services, our use case for block storage is unique. We can leverage our scale and particular use case to customize both the hardware and software, resulting in better unit economics."
If you're like most people, computer upkeep isn't always at the top of your list, and chances are good that you've watched your PC's speed steadily decline ever since you unboxed it. It also doesn't matter if you're running Windows 7, 8.1, or 10 – we've included easy-to-follow steps for each operating system. Solid-state drives (SSDs) are gaining in popularity as their prices drop, primarily due to their superior speed. Think of a fragmented hard drive like a messy office where you opened files from your cabinet and placed them haphazardly around the room. Your memory is excellent, so you can find all the papers you need, but you waste time moving around looking for them. SSDs, by contrast, have their own optimization technique – known as the TRIM command – which rids an SSD of any blocks of data that are no longer needed and keeps the drive in peak operating condition.
No spin: malleable scale-out storage software built for flash

A stretchy, scale-out file storage system built for flash and covering the on-premises and public cloud worlds has been announced by Elastifile. We first heard about Israel-based Elastifile in January a year ago, when it pulled in a $35 million B-round. At the time its developing technology seemed like storage nirvana; we wrote: "Enterprise-class, web-scale storage software running on all-flash media and providing file, object and block storage access protocols." Cisco gave it some more money in June 2016. Elastifile now has something to ship and is putting its best product – oh, sorry, "solution" – foot forward, and it's shod in a bright and polished shoe.
Block storage startup Datera is partnering with Accelerite, and has hired Flavio Santoni as a senior exec in a president's role. Datera's Elastic Data Fabric (EDF) runs as software in x86 servers with flash, or flash and disk, storage, and Datera sells a range of all-flash and hybrid flash-disk appliances if you prefer to buy the whole EDF caboodle from it. Datera says EDF provides low-latency data access with its flash hardware. It runs on scale-out hardware, with web-scale economics according to Datera, and integrates with workload orchestration frameworks such as VMware, OpenStack, Docker, Kubernetes, Mesosphere DC/OS and CloudStack. Santoni joins Datera from Panzura, where he was the chief revenue officer. Other executives who have joined and then left Datera include:
A very common task in the IT industry is needing to convert between storage size units – bytes, kilobytes, megabytes, gigabytes, terabytes, etc. Guest author Brian Smith is an AIX/Linux systems administrator in Colorado. You can follow Brian on Twitter at @brian_smi and see his blog at http://www.ixbrian.com/blog. The solution to all this was that the official definition of a "gigabyte" is now 1,000,000,000 bytes, and a "gibibyte" is 1,073,741,824 bytes. I don't know about you, but I have never actually heard another person say the word "gibibyte". Throughout the rest of this post I will refer to a gigabyte as 1,073,741,824 bytes, as this is the common usage among people even if it is incorrect per the textbook definition.
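To make the two definitions concrete, here is a short, illustrative Python sketch (not from Brian's post) showing how far apart a decimal gigabyte and a binary gibibyte drift at drive-sized numbers:

```python
# Decimal (SI) units: 1 GB  = 10**9 bytes.
# Binary (IEC) units: 1 GiB = 2**30 bytes = 1,073,741,824 bytes.

def bytes_to_gb(n_bytes):
    """Convert a byte count to decimal gigabytes (GB)."""
    return n_bytes / 10**9

def bytes_to_gib(n_bytes):
    """Convert a byte count to binary gibibytes (GiB)."""
    return n_bytes / 2**30

# A drive marketed as "1 TB" holds 10**12 bytes:
marketed = 10**12
print(bytes_to_gb(marketed))             # 1000.0 GB
print(round(bytes_to_gib(marketed), 2))  # 931.32 GiB -- the "missing" capacity
```

The roughly 7 per cent gap between the two figures is exactly why an OS reporting in binary units shows less space than the box promised.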
With remarkable timing – Nimble made these claims just hours before the S3 outages, which had knock-on effects for EBS and other services – the storage contender claimed the two cloud giants' infrastructure does not have the availability or reliability needed. Nimble staffer Dimitris Krekoukias quoted Amazon EBS documentation as an example to justify this stance: "Amazon EBS volumes are designed for an annual failure rate (AFR) of between 0.1 per cent and 0.2 per cent, where failure refers to a complete or partial loss of the volume, depending on the size and performance of the volume. This makes EBS volumes 20 times more reliable than typical commodity disk drives, which fail with an AFR of around 4 per cent. For example, if you have 1,000 EBS volumes running for 1 year, you should expect 1 to 2 will have a failure." He claimed: "Every single customer I've spoken to that has been looking at AWS had never read that link I posted in the beginning, and even if they had, they glossed over the reliability part."
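The expected-failure arithmetic in the quoted figures is easy to check; a quick, illustrative Python sketch using the AFR numbers from the EBS documentation:

```python
def expected_failures(volumes, afr):
    """Expected number of annual volume failures for a given annual failure rate."""
    return volumes * afr

volumes = 1000

# AFR range quoted from the EBS docs: 0.1% to 0.2%
print(expected_failures(volumes, 0.001))  # 1.0
print(expected_failures(volumes, 0.002))  # 2.0

# Versus a typical commodity disk drive at ~4% AFR
print(expected_failures(volumes, 0.04))   # 40.0
```

So the "1 to 2 failures per 1,000 volumes per year" line follows directly from the stated AFR range, as does the 20x claim against a 4 per cent commodity-drive AFR.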
Nimble Storage has launched what it claims is the only enterprise-grade multicloud storage service for running applications in Microsoft Azure and Amazon Web Services (AWS). Anticipating the next wave of applications moving to the cloud and the stringent requirements that will be placed on storage, Nimble Cloud Volumes offers enterprise-grade availability and data services for block storage in the cloud. The first wave of applications moving to the cloud saw organisations implement native cloud applications, mostly web and mobile. Now a new wave is taking place, in which organisations are starting to migrate test and development instances, and even some production instances of traditional workloads – CRM, financial applications, and other business applications – to the cloud.
Because it isn't necessarily storage IO at all

Bypassing the storage IO stack for an IO lightspeed jump

Analysis The best IO is... no IO. Windows Server 2016 has code to supercharge data storage IO speed by not treating it as IO any more. It uses storage-class memory (SCM) as a persistent store – one that is on the memory bus, close to the CPU, and doesn't lose its contents when power is lost: an NVDIMM-N type device. JEDEC has defined three classes of NVDIMM:
Oracle has set the price point at 20% below the AWS list price. Oracle's latest effort to challenge the cloud market leaders has entered general availability. The Bare Metal Cloud Service, which Oracle CTO Larry Ellison first revealed at the OpenWorld conference, has been described as giving the company a technological advantage over Amazon. The offering, an Infrastructure-as-a-Service product that provides a bare metal cloud – servers that have no Oracle software running on them – runs in a virtualised network environment and will deliver services such as network block storage, object storage, VPN connectivity, and Database-as-a-Service. Big Red has made the services available in its US-Southwest region, with the company adding more regions in the future. The region offers fault-independent Availability Domains, which are basically three separate data centre facilities.
Comment Primary Data is providing file services for VSAN and its block-based storage. VSAN is VMware's virtual SAN software, which aggregates storage on connected servers to provide a virtual shared block storage array. It is used in many hyper-converged infrastructure appliance (HCIA) systems, such as Dell EMC's VxRail. Primary Data's DataSphere software product provides a unified storage abstraction layer across multiple silos – direct-attached, network-attached, and private and public cloud storage. It says DataSphere gives VSAN added scale-out file-serving NAS capabilities, so a VSAN-based HCIA can also be a scale-out NAS, converging block and file access. Customers no longer need a separate filer.
The ScaleIO Ready Node will run on PowerEdge x86 servers. The newly formed Dell EMC has revealed its first product. The software-defined storage ScaleIO Ready Node is an all-flash offering that runs on Dell EMC PowerEdge x86 servers. The product is designed to allow users to quickly deploy a server storage-area network that will fit in with existing legacy infrastructure. Basically, ScaleIO is meant to provide the benefits of a storage network; it requires a minimum of three nodes, and it can scale to thousands of server nodes. ScaleIO had previously been made available as part of a VxRack configuration on Quanta servers, and while Dell EMC no longer sells that VxRack product, ScaleIO lives on.
EMC and Dell only became Dell EMC last week, but they've already managed to squeeze out a product. No new bezel, but ScaleIO gets some space on the on/off switch. The new offspring is the ScaleIO Ready Node: 13th-generation PowerEdge servers optimised to run the former EMC's ScaleIO software-defined block storage code. To build a ScaleIO rig you need to start with three of these bad boxen, but you can keep going until you hit 1,000 in one logical array. Dell EMC's hardware specification sheet PDF lists all-HDD, hybrid and all-flash configurations, but to hit the upper limit for storage capacity of 46 terabytes per node you'll need to go all-flash in a 24 x 1.92TB configuration. Broadwell Xeons, 12Gb/s SAS interfaces to disk, and four 10GbE ports round things out.
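As a sanity check on the capacity claim, the 24 x 1.92TB configuration multiplies out as follows (illustrative arithmetic only, working in whole gigabytes to avoid float noise):

```python
drives_per_node = 24
drive_gb = 1920                      # one 1.92TB flash drive, expressed in GB

node_gb = drives_per_node * drive_gb
print(node_gb)                       # 46080 GB, i.e. 46.08TB -- the "46 terabytes per node"

max_nodes = 1000                     # upper limit for one logical array
print(node_gb * max_nodes // 1000)   # 46080 TB, roughly 46PB at full scale
```

So the quoted per-node figure is simply 46.08TB rounded down, and a maxed-out 1,000-node array would land in the tens of petabytes.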
Coho Data gets closer to Hadoop and containers

Coho Data has added concurrent Hadoop Distributed File System (HDFS) access to its block and file access on the DataStream array system, along with multi-tenancy quality of service. Its scale-out DataStream MicroArrays are servers with all-NVMe flash, or hybrid flash and disk, storage, which can run storage-related applications such as video transcoding in the array. But Coho stresses that they are not hyper-converged systems capable of running application virtual machines. Instead, they can be used to provide vSphere storage. DataStream v2.7 added the block storage capability to the existing file storage support.
Analysis Excelero is working on its new NVMesh software to connect shared NVMe SSD storage with accessing servers and their applications. The aim is to deliver a centralised, petabyte-scale block storage pool with local, directly connected NVMe SSD access speeds, using commodity server, storage and networking hardware. The company was founded in 2014 by CEO Lior Gal, CTO Yaniv Romem, VP Engineering Ofer Ishri and Chief Scientist Omri Mann. It received $20 million in funding from Battery Ventures and Square Peg Capital in 2015. NVMesh has two main components:
- An intelligent client block driver, which runs on accessing servers needing to access the NVMesh logical block volumes
- A target module, which runs on the shared SSD storage systems to validate initial client-drive connections, but is not in the data path
Israeli startup E8 has launched its rack-scale NVMe over Fabrics E8-D24 array at the Flash Memory Summit, saying it has the storage array Holy Grail, setting up direct competition with EMC's DSSD product. The Holy Grail is a trifecta of high performance, low cost, and high availability. Its array is aimed at real-time market data analytics, fast block storage for hyper-scale data centres, high-performance computing, and SQL and NoSQL database applications. IDC's research director for storage, Eric Burgener, talks of it completely changing the infrastructure density and cost equations. Some speeds and feeds, courtesy of E8's marketing material: 70TB in a 2U rack unit available this year, and 140TB next year.
Software-defined storage company StorPool has launched an upgraded version of its block storage system, which now integrates with CloudStack, the Apache software for public and private Infrastructure-as-a-Service (IaaS) clouds. According to the company, the new version also brings other improvements, including lower CPU usage, up to 30 percent more IOPS, data capacity savings of up to 15 percent, and increased scalability to beyond 1PB. Aimed at service providers, enterprises, and cloud builders, StorPool aggregates the capacity of standard x86 servers into a single shared pool of block storage. It creates a single pool of data storage that allows companies to use "the full capacity and performance of a set of commodity drives", the company says. Along with CloudStack integration, StorPool offers support for OpenStack, OnApp, Docker, and Linux LVM and LXC. StorPool says it has also increased scalability to 20,000 volumes and snapshots per cluster, with support for clusters up to 1PB in size using 30TB volumes.