Circuit breakers broke bad: the workload moved, but the array flipped out under heavy load. Salesforce.com has revealed that a bug in the firmware of its storage arrays was behind last week's data loss incident. That mess took the company's NA14 instance offline, so it took steps to move it into a Chicago data centre. Once these timeout conditions began, a single database write was unable to complete successfully, which introduced a file discrepancy condition into the database. The data loss came about because, while Salesforce's internal backup processes are designed to be near real-time, the local copy of the database had not yet completed. Salesforce says the circuit breakers that started the mess passed tests in March 2016, but they have been replaced anyway. We do know, thanks to a 2013 post by site reliability engineer Claude Johnson, that Salesforce has in the past used ZFS and Solaris-powered servers for storage.
"We have had good success with our GPUs in high-performance computing, deep learning, data analytics, and remote workstations," said McHugh, a former Cisco executive who joined Nvidia six months ago. The Tesla M10 GPU supports high user density when delivering apps such as Outlook, Office 2016, web browsers, Adobe Photoshop, and the Windows 10 operating system. Delivering business applications in a virtualized way is becoming more challenging because more businesses are using demanding graphics apps these days. The percentage of GPU-accelerated apps has more than doubled in the past five years, with half that growth coming in the first months of 2016 alone, according to a study by Lakeside Software. To provide the best user experience, these applications increasingly use the OpenGL and DirectX APIs, as well as graphics technology from the data center. "While the need for advanced GPU technology has commonly been associated with the usage of 3D applications, as enterprises make the move to software like Windows 10, Office 365, and other SaaS and web apps, IT departments will increasingly seek the benefits of GPU acceleration to provide everyday business tools to all of their users," said Robert Young, analyst for IT Service Management and Client Virtualization Software at IDC, in a statement. Nvidia is teaming up with virtualization software companies, such as Citrix and VMware, to deliver high-end virtualized apps that run as if they were being processed on a user's personal machine. The cost of running such virtual apps or remote desktop sessions is now down to less than $2 a month per user and, for virtual PCs, less than $6 a month per user. The new Nvidia Grid software is available worldwide today, and the Tesla M10 will be generally available in the fall. Virtualized apps can now be delivered at a subscription price of about $10 per concurrent user on the Nvidia Grid service, McHugh said.
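The per-user figures quoted above can be turned into a quick back-of-envelope budget. The sketch below is illustrative only: the seat count and the 60% peak-concurrency ratio are assumptions, not figures from the article, and real pricing depends on licensing terms and hardware.

```python
# Back-of-envelope monthly costs for a hypothetical 500-seat deployment,
# using the per-user figures quoted in the article. The seat count and
# concurrency ratio are assumptions for illustration.
users = 500
virtual_app_cost = 2.0   # $/user/month, virtual apps or remote desktop sessions
virtual_pc_cost = 6.0    # $/user/month, full virtual PCs
grid_sub_cost = 10.0     # $/concurrent user/month, Nvidia Grid subscription

# Concurrent licensing only has to cover users logged in at once;
# assume 60% of seats are active at peak.
concurrency = 0.6

print(f"virtual apps:      ${users * virtual_app_cost:,.0f}/month")
print(f"virtual PCs:       ${users * virtual_pc_cost:,.0f}/month")
print(f"Grid (concurrent): ${users * concurrency * grid_sub_cost:,.0f}/month")
```

Under these assumptions, concurrent-user pricing at $10 can come in close to per-user virtual PC pricing at $6, because it covers only the active fraction of seats.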
Forget the CPU, GPU, and FPGA: Google says its Tensor Processing Unit, or TPU, advances machine learning capability by a factor of three generations. "This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law)," the blog said. "Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly." The tiny TPU can fit into a hard drive slot within the data center rack and has already been powering RankBrain and Street View, the blog said. Analyst Patrick Moorhead of Moor Insights & Strategy, who attended the I/O developer conference, said that, from what little Google has revealed about the TPU, he doesn't think the company is about to abandon traditional CPUs and GPUs just yet. He likened the comparison to decoding an H.265 video stream with a CPU versus an ASIC built for that task.
But now, on its help page, Salesforce has issued an apology for the disruption and has moved to reassure customers that, if a similar event were to occur in future, the disruption would be resolved much more quickly. The root cause of the issue was that a circuit breaker responsible for controlling power to the Washington datacenter failed on 9 May. "The breakers are used to segment power from the data center universal power supply ring and direct the power into the different rooms." This was only the start: in an effort to restore service to the NA14 instance as quickly as possible, the team then moved it from its primary data center (Washington) to its secondary data center in Chicago. All functionality was restored to the NA14 instance, including sandbox copy and weekly export functionality, on 15 May. "From that investigation, corrective steps will be determined and implemented," said Salesforce.
Google has begun to use computer processors its engineers designed to increase the performance of the company's artificial intelligence software, potentially threatening the businesses of traditional chip suppliers such as Intel Corp. and Nvidia Corp. During the past year, Google has deployed thousands of these specialized artificial intelligence chips, called Tensor Processing Units, in servers within its data centers, Urs Holzle, the company's senior vice president of infrastructure, told reporters Wednesday at the company's developer conference. Google declined to specify precisely how many of the chips it is using, but stressed that it continues to use many typical central processing units and graphics processing units made by other companies. "It's been in pretty widespread use for about a year." Google has no plans to sell the specialized chips to third parties, said Diane Greene, Google's senior vice president of cloud. Google and other big data-center operators are the largest consumers of server processors, the main engine of growth and profit for Intel, the world's biggest semiconductor maker. Graphics maker Nvidia is also pinning much of its future growth ambitions on the bet that its chips will have a larger role to play in data processing, including artificial intelligence and machine learning. Google's chip connects to computer servers via a protocol called PCI-E, which means it can be slotted into the company's computers, rapidly augmenting them with faster artificial-intelligence capabilities. As the field matures, Google might very well build more specialized processors for specific AI tasks, he said. Over time, Google expects to design more system-level components, Holzle said. Even Nvidia, which makes traditional graphics processing units that have been adopted for machine learning, is beginning to add more custom elements to its hardware.
"Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly." Intriguingly, Jouppi says a board with a TPU "fits into a hard disk drive slot in our data center racks." He also says Google moved from first tested silicon to using the silicon in production within 22 days. Performance is "an order of magnitude better-optimized performance per watt for machine learning," Jouppi says, or "roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law)." Jouppi also sheds light on all those reports about Google developing silicon, writing "... great software shines brightest with great hardware underneath." TPUs may therefore account for some of Google's job ads seeking silicon designers.
So said Ed Healy, CEO at data centre management company RF Code. He said that it is essential for a provider to understand their total capacity from a power perspective, so as to keep customers updated on how much of it they are using and when. This capacity planning will also help data centres improve their scalability and flexibility by giving them insights into business needs. According to Paul Lewis, data centre manager at colo Aegis Data, "the growth of IoT will force colocation facilities to adopt greater flexibility". Lewis said: "In order to accommodate the sheer volume of IoT, we will see clients increasingly demanding greater scalability capabilities from their providers." In the commercial data centre sector, IoT is changing customer expectations.
Microsoft is stepping up its commitment to reduce the impact its data centers have on the environment, with a goal of using 50 percent renewable energy by 2018. (Pictured: a wind farm in Norfolk, England.) As more services move to the cloud, online giants are building more data centers to keep up. On Thursday, Microsoft said it will step up its commitment to reduce the impact its data centers have on the environment. "Across the tech sector, we need to recognize that data centers will rank by the middle of the next decade among the largest users of electrical power on the planet," Brad Smith, the company's president and chief legal officer, said in a blog post. Today, roughly 44 percent of the electricity used by Microsoft's data centers comes from renewable energy sources, he said. And it's experimenting with undersea data centers, which may be able to tap into offshore wind farms and require less energy for cooling.
Despite the large numbers, this data doesn't show that there were any net neutrality violations.The FCC's website notes that the agency doesn't verify the facts in each complaint; these are just raw numbers based on the categories selected by customers when they file complaints.But complaints can be useful for customers, particularly for billing problems, because ISPs are required to respond to each one within 30 days.Previously, getting detailed statistics required filing public records requests, which we did for our "Complaint factory" article.The Consumer Complaint Data Center provides a broader look at the types of complaints the FCC receives, but it doesn't show the text of complaints.Internet service and pay-TV providers are rated more poorly by customers than any other industry measured by the American Customer Satisfaction Index—below airlines, health insurers, utilities, banks, and many other types of companies.
Google's Tensor Processing Unit (TPU) fits in a hard-drive slot of a server and is claimed to accelerate TensorFlow applications by the equivalent of skipping three generations of Moore's Law. Advertising giant Google has unveiled a custom processor developed to speed up its TensorFlow machine learning platform: the Tensor Processing Unit, or TPU. "We started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications." The TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. "Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly." The TPU is far from a lab experiment, too: impressively, Google went from first tested silicon to running applications within its data centres in just 22 days, and the chip currently accelerates products from Street View and RankBrain to the machine intelligence that recently bested Go champion Lee Sedol.
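The trade-off described above — tolerating reduced precision in exchange for cheaper arithmetic — can be illustrated in a few lines of NumPy. This is a minimal sketch of 8-bit quantized matrix multiplication in general, not Google's actual TPU arithmetic; the per-tensor scaling scheme and function names are illustrative.

```python
import numpy as np

def quantize_int8(x):
    """Map a float32 array to int8 using a single per-tensor scale."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    """Multiply two float matrices using int8 operands internally."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    # Accumulate in int32, as low-precision hardware typically does,
    # then rescale the result back to float.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

exact = a @ b
approx = int8_matmul(a, b)
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.4f}")
```

For well-behaved inputs like these, the quantized product stays within a few percent of the float32 result, which is the sense in which inference workloads can tolerate narrower arithmetic while the hardware spends far fewer transistors per multiply.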
Google has built its own computer chip. It threatens the future of commercial chip makers like Intel and Nvidia, particularly when you consider Google's vision for the future. According to Urs Hölzle, the man most responsible for the global data center network that underpins the Google empire, this new custom chip is just the first of many. Hölzle declined to go into specifics on how exactly Google was using its TPUs, except to say that they handle part of the computation needed to drive voice recognition on Android phones. Moorhead wonders if the new Google TPU is overkill, pointing out that such a chip takes at least six months to build, a long time in the incredibly competitive marketplace in which the biggest Internet companies compete. Asked why Google built its chip from scratch rather than using an FPGA, Hölzle said: "It's just much faster."
C-level briefing: VR is invading the world, but data centres are still learning how to use it to their benefit. CBR asks Aegis Data's CEO how VR will boost the hosting business. However, despite data centres being the force that powers the content visualised in every headset, they have not yet taken VR 100% into their operations. This comes despite the huge opportunity and potential VR could bring to data centre management, a view shared by Greg McCulloch, CEO at enterprise data centre services provider Aegis Data, who believes the data centre industry has the potential to be the biggest beneficiary of the rise of VR. Just four years ago, HPC required 10kW racks, but now the demand has increased, ranging all the way up to 30kW. It is also crucial that facilities are future-proofed for such disruptive technologies, and that is "another key challenge for data centre organisations". "Many claim to provide high performance racks, but that equals far more infrastructure demands and therefore additional costs."
£439m gets ST Telemedia 17 data centres across key Indian and Singaporean cities, including Delhi, Mumbai and Bangalore. The Asian data centre market has undergone a massive shift this week as Tata Communications offloaded 74 percent of its Indian and Singaporean data centre business to ST Telemedia. ST Telemedia, one of Singapore's largest communications companies, has bought the stake for $640 million (£439m), marking the latest expansion of the company's data centre operations. Customers of these data centres include e-commerce platforms, global multinational companies, and some of Asia's largest blue chip businesses. "Since ST Telemedia's initial investment in the data centre business in mid-2014, we have made remarkable progress in building a formidable data centre footprint internationally," said ST Telemedia executive director Sio Tat Hiang. "The latest addition of India to the STT GDC network will be a major impetus to advance the company's ambition to be a significant global data centre service provider." This is thanks to a massive push towards cloud and IaaS workloads from small and medium businesses in the subcontinent, boosting spend on cloud storage.
A submarine cable between Finland and Germany was opened on Thursday. Frankfurt is a major European hub for network traffic and has attracted a wealth of data center investments. The link has opened many operators' eyes to the possibility of offering cross-border services, as connections between Finland and Germany will improve and speed up. It enables a diversified range of services. For Netflix and other online services this is certainly significant: the speed of the international legs of the connection improves by about 25 percent. Domestic operators are also working to improve capacity, says Joensuu.
Analysis: cloud strategies from migration to mitigation, and financial services running agile development in the cloud. Industry watchers say the numbers show accelerating cloud take-up, but users continue to express concerns about security, flexibility, control and compliance. Says Bob Welton, senior director at NTT Communications: "We've seen the early days of people moving cloud-ready workloads into hyperscale data centres and public cloud providers." Though not generally known for its bleeding-edge technology adoption, Financial Services is generating a lot of activity around containerisation and microservices, and this is even pushing the cloud agenda within FS organisations. In two or three years we'll see firms saying they've got a massive workload to move to hybrid cloud, and what they'll be looking for is improved risk mitigation from cloud suppliers and cloud assurance from across the globe. From a data perspective it can appear to be quite a segmented market. Does your provider ensure the necessary compliance for your industry or market in areas such as security and risk?
The project, drawing on a database of 10 billion existing images, is designed to train systems to help doctors detect cancer, Alzheimer's and other diseases earlier and more accurately. Internet giants such as Google Inc., Facebook Inc., Microsoft Corp., Twitter Inc. and Baidu Inc. are among the most active, using the chips called GPUs to let servers study vast quantities of photos, videos, audio files and posts on social media to improve functions such as search or automated photo tagging. "There is no way that existing chip architectures will be right in the long term," said Jeff Hawkins, co-founder of Numenta, a company started 11 years ago to work on brain-like forms of computing. But some argue that GPUs simply aren't as efficient as chips designed from scratch for machine learning. Some companies, like Nervana and Movidius, emulate the parallelism of GPUs but focus on moving data more quickly and dispensing with features needed for graphics. Others, like International Business Machines Corp. with a chip dubbed TrueNorth, have developed chip designs inspired by the neurons, synapses and other features of the brain.
Dropbox has announced its latest move to woo Europeans with its cloud-based file-hosting service, with the launch of a new office in Germany to cater to the DACH region, namely Switzerland, Austria, and of course Germany. As a result of this highly competitive field, questions have emerged about Dropbox's longer-term viability, and such concerns haven't been entirely without merit: the company shuttered a couple of apps last year, and it has reportedly cut back on a number of employee perks lately. But it has also been on a major product development push of late: it launched Project Infinite, which shows all company files locally while storing them remotely, introduced support for Facebook Messenger, and rolled out a cheaper pricing plan for educational institutions. However, around three-quarters of Dropbox's 500-million-plus user base is based outside the U.S., with a significant portion of those in Europe, which is why the company is continuing to double down on its efforts on the continent. "One in three internet users in DACH are now on Dropbox, and they've created over 163 million connections to date by sharing documents and folders," said Thomas Hansen, global vice president for revenue at Dropbox, in a blog post. But converting free users into paid users is a perennial challenge for most businesses that adopt a freemium business model, so to help reduce that friction Dropbox launched localized payments last year, kicking off in 12 European markets. This effectively saw Dropbox move beyond bank cards, PayPal, and Discover, and into direct debit, which is a popular way of setting up recurring payments in Europe. Dropbox's move to open a base in Germany is notable for one overarching reason.
Microsoft commits to reducing reliance on carbon offset certificates and aims to boost direct renewable energy use in future. Microsoft has committed to boosting the amount of pure renewable energy it uses for its global data centres, setting a goal of increasing the wind, solar, and hydropower energy it purchases directly through energy grids to 50 percent by 2018. Whilst Microsoft claims its data centre operations have been carbon-neutral since 2012, a sizable chunk of Microsoft's carbon credentials has been purchased through renewable energy certificates, which effectively offset carbon emissions. "As we move forward, we will continue to purchase renewable energy certificates to ensure we reduce our carbon emissions to zero," caveated Microsoft's chief legal officer Brad Smith. Currently, only 44 percent of Microsoft's total data centre power is generated by wind, solar and hydropower sources. Microsoft is also set to launch two new data centre regions in the UK later this year, just after the launch of a dedicated German Azure region next month.
The Trustworthy Accountability Group (TAG) announced Monday that it is launching its Certified Against Fraud certification program. Since the initiative, aimed at rooting out fraud in digital advertising, was first announced in October, more than thirty ad tech and agency partners have signed on to participate: Amobee, AppNexus, Collective, comScore, DoubleVerify, Dstillery, engage:BDR, Exponential, Forensiq, Horizon Media, Index Exchange, Integral Ad Science, Interpublic Group, MediaMath, Moat, ndp, News Corp, Omnicom Group, OpenX, Publicis Worldwide, RhythmOne, Rocket Fuel, Rubicon Project, Sociomantic, sovrn, SpotX, TubeMogul, White Ops, WPP, Yahoo, and Zemanta. As more TAG anti-fraud seals are awarded, the cracks in our industry exploited by bad actors will also be sealed against their criminal endeavors. Advertisers, authorized advertiser agents and other direct buyers must have a designated TAG compliance officer and comply with the Media Rating Council's Invalid Traffic (IVT) Detection and Filtration Guidelines. Ad networks and other indirect buyers and sellers must also fulfill all steps required of buyers, plus domain list filtering, data center IP list filtering, and TAG's Payment ID protocol. "Every dollar spent on a fraudulent ad is a dollar that is stolen from marketers," said Bob Liodice, president and CEO of the Association of National Advertisers (ANA).
Xilinx and Mellanox also join the CCIX Consortium, which aims to give data centre CPUs an open sharing architecture to boost speed and efficiency. A high council of seven data centre heavyweights has forged a new consortium to develop a platform that will let processors from different vendors work together whilst sharing the same memory. "Representing a milestone in the industry, a single interconnect technology is being developed that will provide exactly that open framework," said Qualcomm. Applications such as big data analytics, search, machine learning, NFV, wireless 4G/5G, in-memory database processing, video analytics, and network processing benefit from acceleration engines that need to move data seamlessly among the various system components. NVIDIA also has its own technology, NVLink, which boosts connectivity between GPUs and IBM POWER. Still, the alliance of vendors who have chosen to work together is a wise move, especially ahead of Intel's acquisition of Altera, which will increase its accelerator technology, as noted by Moor Insights & Strategy analyst Karl Freund.