Category Archives: Cloud Servers
Extended VPN deal: 73% off and free cloud storage with IPVanish VPN – TechRadar India
It was a VPN offer that was supposed to end the second that the calendar turned from March to April, but we're delighted that IPVanish has extended its fabulous VPN deal for another 30 days.
If you sign up to IPVanish before the end of April, you'll get a whole year of VPN protection and secure cloud storage from SugarSync for just $39.
When it comes to VPN goodness, we rank IPVanish extremely highly - the provider has 24/7 customer support, zero traffic logs, unlimited bandwidth and an excellent Windows kill switch. It really is one of the very best around.
And then throw in that freebie and discount, and you're laughing. The SugarSync addition gets you a full 250GB of secure data storage. This means that all your photos, videos and personal documents (whatever you choose to store) will remain safeguarded from outsiders. That means that for the next 12 months your VPN and storage needs are completely covered for the equivalent of just $3.25 a month.
Still unsure if this is the deal for you? Scroll down to see this deal in full, or why not also check out our best VPN deals guide for all of the very best offers on cyber privacy.
As well as unblocking Netflix (hello, streaming!) and being one of the best value-for-money VPNs, it also has a 7-day money-back guarantee and servers in over 75 countries.
Plus, it boasts incredible download speeds so you don't need to worry about the VPN slowing down your device, and it's got plenty of powerful, configurable apps. So whether privacy, streaming or cost is your reason for getting a VPN, IPVanish ticks all the boxes.
Still undecided? Check out our IPVanish review.
Does the US CLOUD Act hang darkly over your data privacy? – The Register
Webcast Here's something that you may not know, something the cloud companies are not keen to shout about too loudly.
The recently enacted Clarifying Lawful Overseas Use of Data (CLOUD) Act in the US allows federal law enforcement to access electronic communications data stored on the servers of all the major American cloud companies in the pursuit of information relevant to a criminal investigation.
That applies even if those servers are anywhere in the world, not just in the States. What's more, if the FBI decides to nose through your data, they don't even need to tell you.
You may think none of this matters because you are shielded by Europe's General Data Protection Regulation (GDPR). In force since May 2018, the GDPR aims to unify the EU's regulatory environment, and also gives control to individuals over their personal data. It means any cloud provider that complies with US law and allows the FBI to nose around in your data risks breaching the GDPR. And in that case, will your cloud provider side with you or with the US government?
If this is news to you and it's setting off alarm bells, you can find out where you stand by tuning in to this webcast, brought to you by web hosting company Ionos, starting at 1100 BST on 15 April.
In conversation with The Reg's Tim Phillips, Sab Knight, head of sales UK at Ionos, and Robert Healey, founder of Relentless Data Privacy, will help you discover:
Find out more and sign up for the webcast right here.
Google Cloud Engine outage caused by ‘large backlog of queued mutations’ – The Register
A 14-hour Google cloud platform outage that we missed in the shadow of last week's G Suite outage was caused by a failure to scale, an internal investigation has shown.
The outage, which occurred on 26 March, brought down Google's cloud services in multiple regions, including Dataflow, BigQuery, Dialogflow, Kubernetes Engine, Cloud Firestore, App Engine, and Cloud Console. The systems were affected for a total of 14 hours.
The outage was caused by a lack of memory in the company's cache servers, according to an internal investigation by the company published today. "The trigger of the incident was a bulk update of group memberships that expanded to an unexpectedly high number of modified permissions, which generated a large backlog of queued mutations to be applied in real-time," the investigation said.
"The processing of the backlog was degraded by a latent issue with the cache servers, which led to them running out of memory; this in turn resulted in requests to IAM timing out. The problem was temporarily exacerbated in various regions by emergency rollouts performed to mitigate the high memory usage."
Google resolved the issue by installing more memory in the cache servers and restarting them. But by this point, a heap of stale data had built up, leading to further issues that system engineers had to battle for several more hours. The systems were back up and operating at 05:55 UTC the following morning.
In response to the issues, Google said that it is "ensuring that the cache servers can handle bulk updates of the kind which triggered this incident" and that "efforts are underway to optimize the memory usage and protections on the cache servers, and allow emergency configuration changes without requiring restarts."
"To allow us to mitigate data staleness issues more quickly in future, we will also be sharding out the database batch processing to allow for parallelization and more frequent runs. We understand how important regional reliability is for our users and apologize for this incident."
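Google's described remediation, sharding the batch processing so the backlog can be drained in parallel and in smaller, more frequent runs, can be sketched in miniature as follows. This is an illustrative toy, not Google's actual code; every function name here is our own invention:

```python
from concurrent.futures import ThreadPoolExecutor

def shard_for(mutation_id: int, num_shards: int) -> int:
    # Assign each queued mutation to a shard by its ID.
    return mutation_id % num_shards

def apply_shard(shard: list) -> int:
    # Apply every mutation in one shard; here we simply count them.
    return len(shard)

def process_backlog(mutation_ids: list, num_shards: int = 4) -> int:
    # Partition the backlog into independent shards...
    shards = [[] for _ in range(num_shards)]
    for mid in mutation_ids:
        shards[shard_for(mid, num_shards)].append(mid)
    # ...then drain the shards in parallel rather than serially.
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        return sum(pool.map(apply_shard, shards))

print(process_backlog(list(range(1000))))  # 1000
```

The point of the design is that no single worker has to chew through the whole queue, so a sudden bulk update produces several small backlogs instead of one large one.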
Zoom privacy and security issues: Here’s everything that’s wrong (so far) – Tom’s Guide
UPDATED April 3 with additional issues.
UPDATED with details of blog post by Zoom's founder and CEO spelling out fixes Zoom has made and pledge to lock down development for 90 days to find and fix security and privacy flaws, and with blog post by Zoom's chief product officer regarding Zoom's use of end-to-end encryption.
Are you using Zoom yet? It seems that everyone in America who's been forced to work, or do schoolwork, from home during the coronavirus lockdown is using the video-conferencing platform for meetings, classes and even social gatherings.
There are good reasons Zoom has taken off and other platforms haven't. Zoom is easy to set up, easy to use and lets up to 100 people join a meeting for free. It just works.
But there's a downside. Zoom's ease of use makes it easy for troublemakers to "bomb" open Zoom meetings, and for hackers to inject malware into a machine running Zoom. There's also been a lot of scrutiny about Zoom's privacy policy, which until recently seemed to give Zoom the right to do whatever it saw fit with any user's personal data.
Given the soaring usage of the Zoom platform during the coronavirus lockdown, and the near-doubling of its stock price since the beginning of February, Zoom has come under intense scrutiny from security professionals and privacy advocates. And boy, have they found stuff.
We've already mentioned that anyone can "bomb" a public Zoom meeting if they know the meeting number, and then use the file-share feature to post shocking images, or make annoying sounds in the audio. The FBI even warned about it a few days ago.
The host of the Zoom meeting can mute or even kick out troublemakers, but they can come right back with new user IDs. The best way to avoid Zoom bombing is to not share Zoom meeting numbers with anyone but the intended participants. You can also require participants to use a password to log into the meeting.
STATUS: There are easy ways to avoid Zoom bombing, which we go through here.
Zoom meetings have side chats in which participants can send text-based messages and post web links.
But according to Twitter user @_g0dmode and Anglo-American cybersecurity training firm Hacker House, Zoom makes no distinction between regular web addresses and a different kind of remote networking link called a Universal Naming Convention (UNC) path. That leaves Zoom chats vulnerable to attack.
If a malicious Zoom bomber slipped a UNC path to a remote server that he controlled into a Zoom meeting chat, an unwitting participant could click on it.
The participant's Windows computer would then try to reach out to the hacker's remote server specified in the path and automatically try to log into it using the user's Windows username and password.
The hacker could capture the password "hash" and crack it, giving him access to the Zoom user's Windows account.
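To make the mechanics concrete: a UNC path looks like \\server\share\file, and Windows will attempt NTLM authentication against the named server when one is followed. A minimal sketch of the kind of filter a chat client could apply before rendering links as clickable might look like this. This is our own illustration, not Zoom's actual fix:

```python
import re

# A UNC path points at a remote SMB share, e.g. \\server\share\file.
# Windows may silently send the user's NTLM credentials to that server
# when such a link is followed, which is the crux of the attack.
UNC_PATTERN = re.compile(r'\\\\[\w.-]+\\\S+')

def looks_like_unc_path(message: str) -> bool:
    """Return True if a chat message contains a UNC-style link."""
    return bool(UNC_PATTERN.search(message))

print(looks_like_unc_path(r'\\evil.example.com\share\payload.exe'))  # True
print(looks_like_unc_path('https://example.com/doc'))                # False
```

A client that flags such messages (or simply declines to hyperlink them, which is roughly what Zoom's fix amounted to) removes the one-click path to credential leakage.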
UPDATE: Yuan's blog post says Zoom has now fixed this problem.
STATUS: Fixed, apparently.
Mohamed A. Baset of security firm Seekurity said on Twitter that the same flaw also lets a hacker insert a UNC path to a remote executable file into a Zoom meeting chatroom.
If a Zoom user running Windows clicks on it, a video posted by Baset showed, the user's computer will try to load and run the software. The victim will be prompted to authorize the software to run, which will stop some hacking attempts but not all.
STATUS: If the UNC filepath issue is fixed, then this should be as well.
Until last week, Zoom sent iOS user profiles to Facebook as part of the "log in with Facebook" feature in the iPhone and iPad Zoom apps. After Vice News exposed the practice, Zoom said it hadn't been aware of the profile-sharing and updated the iOS apps to fix this.
STATUS: Fixed.
Zoom claims that its meetings use "end-to-end encryption" if every participant calls in from a computer or a Zoom mobile app instead of over the phone. But under pressure from The Intercept, a Zoom representative admitted that Zoom's definitions of "end-to-end" and of "endpoint" are a bit different from everyone else's.
"When we use the phrase 'End to End'," a Zoom spokesperson told The Intercept, "it is in reference to the connection being encrypted from Zoom end point to Zoom end point."
Sounds good, but the spokesperson clarified that he counted a Zoom server as an endpoint. Every other company considers a user device -- a desktop, laptop, smartphone or tablet -- as an endpoint, but not a server.
In other words, the data is encrypted when it travels from a Zoom client application on a computer or mobile device (an endpoint, in networking lingo) to a Zoom server, or vice versa. It's decrypted at the server, and Zoom can see and hear it.
Every other company uses "end-to-end" to mean fully encrypted from one endpoint to another. When you send an Apple Message from your iPhone to another iPhone user, Apple's servers help the message get from one place to another, but they can't read the content.
Not so with Zoom. It can see whatever is going on in its meetings, and it pretty much has to in order to make sure everything works properly. Just don't believe the implication that it can't.
UPDATE: In a blog post April 1, Zoom Chief Product Officer Oded Gal wrote that "we want to start by apologizing for the confusion we have caused by incorrectly suggesting that Zoom meetings were capable of using end-to-end encryption."
"We recognize that there is a discrepancy between the commonly accepted definition of end-to-end encryption and how we were using it," he wrote.
Gal assured users that all data sent and received by Zoom client applications (but not regular phone lines, business conferencing systems or, presumably, browser interfaces) is indeed encrypted and that Zoom servers or staffers "do not decrypt it at any point before it reaches the receiving clients."
However, Gal added, "Zoom currently maintains the key management system for these systems in the cloud" but has "implemented robust and validated internal controls to prevent unauthorized access to any content that users share during meetings."
The implication is that Zoom doesn't decrypt user transmissions -- but because it holds the encryption keys, it could if it had to.
For those worried about government snooping, Gal wrote that "Zoom has never built a mechanism to decrypt live meetings for lawful intercept purposes, nor do we have means to insert our employees or others into meetings without being reflected in the participant list."
And he added that companies and other enterprises would soon be able to handle their own encryption process.
"A solution will be available later this year to allow organizations to leverage Zoom's cloud infrastructure but host the key management system within their environment."
STATUS: This is an issue of misleading advertising rather than an actual bug. We hope Zoom stops using the term incorrectly.
We learned last summer that Zoom used hacker-like methods to bypass normal macOS security precautions. We thought that problem had been fixed along with the security flaw it created.
But a series of tweets March 30 from security researcher Felix Seele, who noticed that Zoom installed itself on his Mac without the usual user authorization, reveals that there's still an issue.
"They (ab)use preinstallation scripts, manually unpack the app using a bundled 7zip and install it to /Applications if the current user is in the admin group (no root needed)," Seele wrote.
"The application is installed without the user giving his final consent and a highly misleading prompt is used to gain root privileges. The same tricks that are being used by macOS malware." (Seele elaborated in a more user-friendly blog post here.)
Zoom founder and CEO Eric S. Yuan tweeted a friendly response.
"To join a meeting from a Mac is not easy, that is why this method is used by Zoom and others," Yuan wrote. "Your point is well taken and we will continue to improve."
UPDATE: In a new tweet April 2, Seele said Zoom had released a new version of the Zoom client for macOS that "completely removes the questionable 'preinstall'-technique and the faked password prompt."
"I must say that I am impressed. That was a swift and comprehensive reaction. Good work, @zoom_us!" Seele added.
STATUS: Fixed.
Plenty of "others" could indeed use Zoom's dodgy installation methods, renowned Mac hacker Patrick Wardle said in a blog post March 30.
Wardle demonstrated how a local attacker -- such as a malicious human or already-installed malware -- could use Zoom's magical powers of unauthorized installation to "escalate privileges" and gain total control over the machine without knowing the administrator password.
Wardle also showed that a malicious script installed into the Zoom Mac client could give any piece of malware Zoom's webcam and microphone privileges, which do not prompt the user for authorization and could turn any Mac with Zoom installed into a potential spying device.
"This affords malware the ability to record all Zoom meetings, or simply spawn Zoom in the background to access the mic and webcam at arbitrary times," Wardle wrote.
UPDATE: Yuan's blog post says Zoom has fixed these flaws.
STATUS: Fixed.
Zoom automatically puts everyone sharing the same email domain into a "company" folder where they can see each other's information.
Exceptions are made for people using large webmail providers such as Gmail, Yahoo, Hotmail or Outlook.com, but apparently not for smaller webmail providers that Zoom might not know about.
Several Dutch Zoom users who use ISP-provided email addresses suddenly found that they were in the same "company" with dozens of strangers -- and could see their email addresses, user names and user photos.
STATUS: Unknown.
Several privacy experts, some working for Consumer Reports, pored over Zoom's privacy policy and found that it apparently gave Zoom the right to use Zoom users' personal data and to share it with third-party marketers.
Following a Consumer Reports blog post, Zoom quickly rewrote its privacy policy, stripping out the most disturbing passages and asserting that "we do not sell your personal data."
STATUS: Unknown. We don't know the details of Zoom's business dealings with third-party advertisers.
Does all this mean that Zoom is unsafe to use? No.
You just need to be aware that the Zoom software creates a huge "attack surface," as security professionals like to say, and that hackers are going to try to come at it every way they can. They're already registering lots of Zoom-related phony domains and developing Zoom-themed malware.
The upside is that if lots of flaws in Zoom are found now and fixed soon, then Zoom will be the better -- and safer -- for it.
"Zoom will soon be the most secure conferencing tool out there," wrote tech journalist Kim Zetter on Twitter April 1. "But too bad they didn't save themselves some grief and engage in some security assessments of their own to avoid this trial by fire."
In a blog post April 1, Zoom CEO and founder Eric S. Yuan acknowledged Zoom's growing pains and pledged that all regular development of the Zoom platform would be put on hold while the company worked to fix security and privacy issues.
"We recognize that we have fallen short of the community's -- and our own -- privacy and security expectations," Yuan wrote, explaining that Zoom was originally developed for large businesses that had in-house IT staffers who could set up and run the software.
"We now have a much broader set of users who are utilizing our product in a myriad of unexpected ways, presenting us with challenges we did not anticipate when the platform was conceived," he said. "These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform. Dedicated journalists and security researchers have also helped to identify pre-existing ones."
To deal with these issues, Yuan wrote, Zoom would be "enacting a feature freeze, effectively immediately, and shifting all our engineering resources to focus on our biggest trust, safety, and privacy issues."
Among other things, Zoom would also be "conducting a comprehensive review with third-party experts and representative users to understand and ensure the security of all of our new consumer use cases."
Privacy researcher Patrick Jackson noticed that Zoom meeting recordings saved to the host's computer generally get a certain type of file name. So he searched unprotected cloud servers to see if anyone had uploaded Zoom recordings and found more than 15,000 unprotected examples, according to The Washington Post. Jackson also found some recorded Zoom meetings on YouTube and Vimeo.
This isn't really Zoom's fault. It's up to the host to decide whether to record a meeting, and Zoom gives paying customers the option to store recordings on Zoom's own servers.
If you host a Zoom meeting and decide to record it, then make sure you change the default file name after you're done.
STATUS: Not really Zoom's problem, to be honest.
You can find open Zoom meetings by rapidly cycling through possible Zoom meeting IDs, a security researcher told independent security blogger Brian Krebs.
The researcher got past Zoom's meeting-scan blocker by running queries through Tor, which randomized his IP address. It's a variation on "war dialing," in which attackers randomly dialed telephone numbers to find open modems in the dial-up days.
The researcher told Krebs that he could find about 100 open Zoom meetings every hour with the tool, and that "having a password enabled on the [Zoom] meeting is the only thing that defeats it."
STATUS: Unknown.
Two Twitter users pointed out that if you're in a Zoom meeting and use a private window in the meeting's chat app to communicate privately with another person in the meeting, that conversation will be visible in the end-of-meeting transcript the host receives.
STATUS: Unknown.
Oracle teases prospect of playing nicely with open-source Java in update to WebLogic application server – The Register
Oracle has chosen this week of all weeks to foist on the world an update of its application server WebLogic, festooned with new features addressing Java EE 8, Kubernetes and JSON.
But the most eye-catching prospect is compatibility with the Eclipse Foundation's fully open-source Java development environment, Jakarta EE 8.
Back in September when the Java EE specs were made public, Mark Little, Red Hat's JBoss CTO, said: "Existing Java EE 8 applications and developers can be confident they can move their applications seamlessly to the Eclipse Foundation effort." And Tom Snyder, veep of Oracle Software Development, promised application server support would follow. "This represents the culmination of a great deal of work by the entire Jakarta EE community, including Oracle. Oracle is working on delivery of a Java EE 8 and Jakarta EE 8 compatible WebLogic Server implementation, and we are looking forward to working with the community to evolve Jakarta EE for the future."
With the release of WebLogic Server 14.1.1, that support for open-source Java has come. Almost.
In a blog announcing the availability of the update to the Oracle application server, Will Lyons, Oracle senior director of product development, teased: "We are currently testing Oracle WebLogic Server for Jakarta EE 8 compatibility as well, and should have results soon."
Elsewhere, the new API Servlet 4.0 includes HTTP/2 support, which Lyons said provided "improved application performance with compatibility for existing Web applications", while API JAX-RS 2.1 "advances REST services support by offering a reactive client programming model".
In terms of environments, there's support and tooling for running Oracle WebLogic Server in containers and Kubernetes, and certification on Oracle Cloud.
For data pipelines, the release supports JSON-P 1.1 and JSON-B 1.0 standards to bring new capabilities for processing JSON documents. "These improvements expand support for building modern applications using the standards-based, proven Java EE platform," Oracle said.
"We integrate with a wide variety of platforms and Oracle software that deliver high performance and availability for your applications, with low cost of ownership," Lyons claimed.
Whether the cost equation adds up is a matter for Oracle's interesting strategy on licensing software. However, developers might welcome the opportunity to build applications in fully open-source Java and deploy them in Oracle's sparkly new application server.
Edge Computing: The Future IoT Solution – Electropages
The field of IoT has seen a dramatic rise of internet technologies being integrated into everyday life. However, the lack of security has resulted in social pressure and government action forcing designers to implement stronger security features. How can edge computing help and why may it become the ultimate solution for IoT in the future?
Since their introduction, IoT devices have exploded globally, with an estimated total of at least 20 billion in use. While the term Internet of Things, or IoT, is a relatively new one, the use of internet-related technologies dates back to the creation of the internet itself. But the IoT movement is more concerned with simple devices that traditionally would not have internet capabilities (such as sensors and data loggers), which is why it is considered a separate sector from standard internet computing technologies such as computers, laptops, and phones.
The first IoT devices were simple in nature and often targeted niche markets, including basic remote temperature and humidity logging. As the data being gathered was benign (i.e. not sensitive), security was given minimal concern, with many devices using default passwords and unencrypted messaging protocols. Since early IoT devices were few in number and limited in capability, they went unnoticed by security experts, cybercriminals, and governments alike. But all of this changed as technology improved, devices became more intelligent, and the data being gathered became more sensitive.
One technology that has accelerated thanks to the IoT sector is AI, fuelled by the unimaginable quantities of data provided by IoT devices. AI systems are being used to power many modern tasks that are otherwise too difficult or too varied to be programmed traditionally, using if statements and switch cases to account for every possibility. Examples include speech recognition, voice recognition, image recognition, intelligent search results, and personalised assistants.

As stated previously, the first data types gathered by IoT were benign, including temperature and humidity, which could be used to create intelligent systems that respond to those environmental stimuli. But designers quickly realised that with advancements in microcontroller technologies (for example, the shift from 8-bit to 32-bit ARM), more complex data types could be gathered, including audio and visual. Such systems could not only gather data about their surroundings but also send that data to a cloud-based AI system, which can learn from it and provide better results in the future. For example, the Amazon Echo is an IoT device that submits spoken user requests to a cloud system, where they are analysed both to perform the request and to improve the AI for future use.

Very quickly, IoT devices exploded globally, with a whole range of integrated features ranging from accelerometers, magnetometers, and motion sensors to cameras and microphones. But the speed at which these devices were being designed and put to market was far too great, and this is where cybercriminals began to take advantage.
The speed at which IoT designs changed, as well as the sudden increase in demand for IoT devices, saw engineers turn around products in record time. This, combined with the inability of governments to respond to fast-changing markets and the short-sightedness of designers, quickly put many billions of devices on the market that contained insufficient security measures while handling highly sensitive data. It would not be long before cybercriminals used the many weaknesses of IoT devices to perform malicious activities, including DDoS attacks, crypto-mining, blackmail, and data selling. Devices on the market would either have a default password or no password, would not use encrypted messaging protocols, would be built on unsecured silicon technologies, or would leave admin privileges in place for the application space (i.e. the firmware would run with full processor privileges). Devices could also leave networks exposed by allowing an attacker to gain easy entry to the device and then use its network connection to gain either internet access or local access (which could allow entry to servers and other devices on the same network).
Following warnings from security specialists and others in the industry, governing bodies around the world have begun to introduce regulations describing how designers should remove features that leave their designs open to easy attack. So far, the majority of these regulations are concerned with removing default passwords, but as time progresses they may expand to cover mandatory encryption, on-device hardware security, and the need for security when a device is decommissioned. However, there is one emerging technology that may help to solve issues with IoT security: edge computing.
Currently, IoT devices gather data from their surroundings and stream it to a cloud-based platform, which in turn can provide multiple features including data viewing, data learning, and data processing. For example, an advanced home automation system might have various IoT sensors around a property whose data is streamed to a cloud-based service that determines how environmental controls should be adjusted. This use of the cloud to perform data processing is often called cloud computing and essentially means that the data processing is done remotely from the IoT device responsible for gathering the data. Edge computing, however, is where the IoT device itself is responsible for some proportion of the data processing, whether partially or entirely. Early IoT devices would not have been capable of edge computing due to the limitations of technology at the time, but with the introduction of powerful microcontrollers at equivalent prices, local IoT devices can start to process their own data.
Edge computing holds a lot of advantages over cloud computing, including security, latency, and reliability. Since edge computing devices transmit little data to a cloud-based system (if any at all), sensitive data is less exposed to potential sources of attack. The lack of transmission means that an attacker would need to gain direct entry to the device itself, as opposed to performing a man-in-the-middle attack, attacking the server, or spoofing the server. Keeping data local to a device also gives designers more opportunity to protect the data as soon as it is gathered, using memory encryption as well as dedicated security hardware. Edge computing devices can also perform partial processing on sensitive data before sending it to a cloud-based system for further processing, which helps obscure the data and thereby reduces its usefulness to an attacker (i.e. a trained neural net is far less sensitive than raw visual data from a camera).
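The partial, privacy-preserving processing described above can be sketched in a few lines: rather than streaming raw sensor samples, the device ships only a local summary. This is a minimal illustration; the function and field names are our own, not from any particular IoT SDK:

```python
from statistics import mean

def summarize_readings(samples: list) -> dict:
    """Reduce raw sensor samples to an aggregate before upload.
    Only the summary leaves the device, so an intercepted payload
    reveals far less than the raw time series would."""
    return {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "max": max(samples),
        "min": min(samples),
    }

raw_temperatures = [21.3, 21.5, 22.1, 21.9, 22.4]
payload = summarize_readings(raw_temperatures)
print(payload)  # {'count': 5, 'mean': 21.84, 'max': 22.4, 'min': 21.3}
```

An eavesdropper sees only an aggregate instead of the full time series, and the device transmits a few bytes per reporting interval rather than a continuous stream, which is exactly the bandwidth benefit discussed below.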
Processing data locally on the device also means that latency is significantly reduced, which is highly beneficial in applications requiring fast results (such as self-driving cars). Local processing also removes the need for a constant internet connection, which helps to improve the reliability of the design; many areas globally still suffer from unreliable internet connections and large swings in internet speed. Edge computing also frees up bandwidth on the local network, which can improve other services such as local servers and other IoT devices, and therefore increase the maximum number of devices on a single network (allowing more IoT devices to be integrated).
While the cost of powerful microcontrollers has continued to fall even as their capabilities have significantly increased, they are still more expensive than low-end parts, making cheaper microcontrollers more desirable for mass-produced devices. The introduction of regulations also makes it harder to use mid-range devices that have the processing capabilities needed for advanced features, as they may lack the hardware security required, leaving them exposed. At the same time, the need for AI in modern products further limits the choices for engineers, who may need AI engines on their IoT devices to run neural networks efficiently.
Edge computing provides designers with a whole new paradigm of computing: low-latency, high-reliability IoT devices that combine the best features of cloud computing with local processing. Hardware security features such as secure boot and root-of-trust will become key technologies for securing devices, and the inclusion of AI engines will allow devices to perform the majority of data processing locally. But despite the many security advantages provided by edge computing, designers still need to carefully consider how their device handles sensitive data, how it could be used maliciously, and how they can not only protect users but also contribute positively to an ever more interconnected world.
Read the original:
Edge Computing The Future IoT Solution - Electropages
OnlyOffice review: create and collaborate with this feature-rich office solution review – TechRadar
OnlyOffice ticks a lot of boxes, and is built for collaboration and teamwork. If you're looking for a powerful Microsoft Office alternative for your business, this may be it. Read through our OnlyOffice review to see why we were so impressed with this cloud- and server-based office suite.
We tested macOS and web versions, and had a streamlined experience on both. A main landing page presents you with your folders and documents, collaborative folders and tools, and cloud accounts, all with a sleek design.
Like many suites, OnlyOffice will look very familiar to those who have experience with Microsoft Office. You can easily edit text and add elements with the ribbon at the top, while a sidebar supports more advanced features like editing embedded chart data and customizing tables.
The interface is well-organized, and the HTML5 web app is impressively responsive: it really felt like using an on-disk program. The only notable limitation we found was trackpad zooming: it doesn't work in Safari and is overly sensitive and slow to respond in Chrome.
Some plans (see below) enable businesses to customize the appearance, interface and function of the software at a deep level.
Before diving in, we'd like to note that OnlyOffice supports the addition of premade and homemade plugins, which means that if a feature doesn't exist, you can create it yourself. It's compatible with all Office and OpenDocument filetypes.
Documents
OnlyOffice supports some of the richest text formatting weve seen, in addition to style creation and customization. Page layout options are comprehensive, with margins, custom page sizes, and even personalized watermarks. Columns are supported, but must all be the same size.
List creation is exemplary: hyphens and asterisks start new lists; a huge range of icons is available; indents cycle through list styles; multilevel lists are supported and customizable; and formatting changes carry through list levels.
A references tab supports automatic Table of Contents creation and customizable footnotes. While OnlyOffice lacks any kind of citation manager, an EasyBib plugin exists, so subscribers can enjoy full integration. Find & Replace supports Replace All, but not searching by style.
Spreadsheets
Our opinion of the Spreadsheets app was mixed. On the one hand, it's certainly powerful: there are lots of built-in formulas, plus support for filtering, Text to Columns and pivot tables. Cell formatting is rich, with customizable number, date, and currency formats. Finally, charts and graphs are easy to create from data and customize.
On the other hand, we found formula input limited. Suggestions appear as you type, but descriptions are available only on hover, and they disappear once the formula is selected. Similarly, argument hints could be clearer or provide examples. Next, error parsing fails to indicate specific problem elements, making it hard to tell what's gone wrong. Finally, #NAME and #VALUE errors give no information when selected, and error tracing is unsupported.
Presentations

Presentations are straightforward and easy to use. Adding slides and elements and choosing highly customizable transitions worked intuitively, as did presenter mode. We did notice that it's impossible to record the timing of a rehearsed slideshow, a feature which MS Office supports.
OnlyOffice is first and foremost a web app, and works incredibly well as such. We did find, however, that the iOS app is a bit cumbersome. This was surprising, given the polished look and smooth interface of the desktop and web versions. Features aren't easily accessible via a ribbon as they are on desktop, but are instead hidden behind icons and menus that don't make their function immediately obvious. There is also no handwriting support.
That being said, most of the functions are present, if a little difficult to find. For example, we were happy to see that embedded graphs worked just fine in word documents, even if editing data takes you to another screen.
Collaboration features are deeply integrated and one of the core functions of OnlyOffice, which is marketed primarily towards businesses looking for a streamlined company-wide solution. Files can be edited by multiple collaborators in real time or by syncing changes, and you can easily invite users or groups from within your network. The web and desktop app support version history, though this feature is regrettably absent from the iOS app.
Finally, cloud sharing is available with services like Google Drive and Dropbox.
OnlyOffice offers four products: Cloud Service, Enterprise, Integration, and Developer. The pricing scheme is complex.
Cloud Service provides cloud storage and access to the OnlyOffice suite for $5/user/month ($3 billed annually/$2 triennially). Storage starts at 20GB for 12 users and increases up to 500GB for 50 users. For teams of over 50, you must contact OnlyOffice for custom pricing.
Meanwhile, Enterprise gives you access to the office suite, plus other collaborative tools like email and calendars, on a private server. You also get enhanced security options and help with installation. There are three tiers, with the lowest priced starting at $1200/year for up to 50 simultaneous connections.
Integration is built to work with cloud services your company already uses, like Jira or Moodle. The Home Server edition costs a one-time payment of $99 for up to 10 users, while the Single Server edition costs $1100 for 50 users, and can be scaled according to need.
Finally, Developer enables you to build OnlyOffice into your own software or SaaS from the ground up, customizing it at the most basic level to fit your needs and your company's brand. For $1500 per server, you get 20 connections, with the price increasing depending on the number of connections.
If you're looking for the best office suite for your business, OnlyOffice ticks almost all the boxes. It already includes most office features and is highly customizable, with powerful plug-in support. Thus, while we lamented certain missing functions, like intuitive error parsing or advanced find and replace, it's theoretically possible to add these yourself. Finally, at $40/month for 10 users with 100GB cloud storage, it's reasonably priced. All in all, this is a great solution for businesses of all sizes.
OnlyOffice is highly customizable and great for businesses, but does require some setting up. If you're looking for a quicker, plug-and-play solution, iWork and OfficeSuite are both good options.
If your business is Mac-based, iWork is the way to go. It's free and supports collaboration and handwriting out of the box. For smaller teams, OfficeSuite's $49.99 Group plan supports up to five users, with highly functional spreadsheets for data analysis.
To see how OnlyOffice fares against the competition, check out our guide to the Best Microsoft Office alternatives.
Read more:
OnlyOffice review: create and collaborate with this feature-rich office solution review - TechRadar
Avoiding DR and High Availability Pitfalls in the Hybrid Cloud – Computer Business Review
The SLAs only guarantee the equivalent of dial tone for the physical server or virtual machine
The private cloud remains the best choice for many applications for a variety of reasons, while the public cloud has become a more cost-effective choice for others, writes David Bermingham, Technical Evangelist at SIOS Technology.
This split has resulted, intentionally or not, in the vast majority of organizations now having a hybrid cloud. But there are many different ways to leverage the versatility and agility afforded by a hybrid cloud environment, especially when it comes to the different high availability and disaster recovery protections needed for different applications.
This article examines the hybrid cloud from the perspective of high availability (HA) and disaster recovery (DR), and offers some practical suggestions for avoiding potential pitfalls.
The carrier-class infrastructure implemented by cloud service providers (CSPs) gives the public cloud a resiliency that is far superior to what could be justified for a single enterprise.
Redundancies within every data center, with multiple data centers in every region and multiple regions around the globe give the cloud unprecedented versatility, scalability and reliability. But failures can and do occur, and some of these failures cause downtime at the application level for customers who have not made special provisions to assure high availability.
While all CSPs define downtime somewhat differently, all exclude certain causes of downtime at the application level. In effect, the service level agreements (SLAs) only guarantee the equivalent of dial tone for the physical server or virtual machine (VM), or specifically, that at least one instance will have connectivity to the external network if two or more instances are deployed across different availability zones.
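A rough back-of-envelope calculation, not from the article, shows why the SLAs are conditioned on deploying two instances across different availability zones: if each instance is up independently a fraction `a` of the time, the chance of both being down at once is `(1 - a)**2`, so pairing instances turns hours of annual downtime into minutes:

```python
def composite_availability(a, n=2):
    """Probability that at least one of n independent instances is up."""
    return 1 - (1 - a) ** n

single = 0.999                          # one instance at "three nines"
pair = composite_availability(single)   # two instances in separate zones

hours_per_year = 365 * 24
print(f"single: {(1 - single) * hours_per_year:.2f} hours downtime/year")
print(f"pair:   {(1 - pair) * hours_per_year * 60:.2f} minutes downtime/year")
```

The independence assumption is exactly what separate availability zones are meant to approximate; it does not hold for application-level faults, which is why the exclusions discussed below matter.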
Here are just a few examples of some common causes of downtime excluded from SLAs:
It is reasonable, of course, for CSPs to exclude these and other causes of downtime that are beyond their control. It would be irresponsible, however, for IT professionals to use these exclusions as excuses for not providing adequate HA and/or DR protections for critical applications.
Properly leveraging the clouds resilient infrastructure requires understanding some important differences between failures and disasters because these differences have a direct impact on HA and DR configurations. Failures are short in duration and small in scale, affecting a single server or rack, or the power or cooling in a single datacenter. Disasters have more enduring and more widespread impacts, potentially affecting multiple data centers in ways that preclude rapid recovery.
The most consequential difference involves the location of the redundant resources (systems, software and data), which can be local, on a Local Area Network (LAN), for recovering from a localized failure. By contrast, the redundant resources required to recover from a widespread disaster must span a Wide Area Network (WAN).
For database applications that require high transactional throughput, the ability to replicate the active instance's data synchronously across the LAN enables the standby instance to be hot and ready to take over immediately in the event of a failure. Such rapid, automatic recovery should be the goal of all HA provisions.
Data is normally replicated asynchronously in DR configurations to prevent the WAN's latency from adversely impacting throughput in the active instance. This means that updates to the standby instance always land after those made to the active instance, leaving the standby warm and resulting in an unavoidable delay when a manual recovery process is used.
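The hot/warm distinction between the two replication modes can be sketched as follows. This is an illustrative toy (all names hypothetical; real databases do this at the storage or transaction-log layer), but it shows why a synchronous standby is always current while an asynchronous one lags:

```python
import queue

class Replica:
    """Trivial key-value store standing in for a database instance."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

def write_sync(primary, standby, key, value):
    # Synchronous (HA over a LAN): the commit is acknowledged only once
    # the standby has applied it, so the standby is always hot and
    # current -- but every write pays the replication round trip.
    primary.apply(key, value)
    standby.apply(key, value)        # blocks until the replica confirms
    return "acked"

def write_async(primary, standby_queue, key, value):
    # Asynchronous (DR over a WAN): acknowledge immediately and ship the
    # update in the background, so WAN latency never slows the primary,
    # but the standby lags behind (warm, not hot).
    primary.apply(key, value)
    standby_queue.put((key, value))  # drained later by a shipping process
    return "acked"

primary, hot_standby = Replica(), Replica()
wan = queue.Queue()
write_sync(primary, hot_standby, "k1", 1)   # hot standby already has k1
write_async(primary, wan, "k2", 2)          # k2 is still in transit
```

The trade-off is symmetric: synchronous replication buys zero data loss at the cost of per-write latency, which is why it is confined to the LAN (or the low-latency inter-zone links described below), while the WAN leg runs asynchronously.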
All three major CSPs accommodate these differences with redundancies both within and across data centers. Of particular interest is the variously named availability zone that makes it possible to combine the synchronous replication available on a LAN with the geographical separation afforded by the WAN. The zones exist in separate data centers that are interconnected via a low-latency, high-throughput network to facilitate synchronous data replication. With latencies around one millisecond, the use of multi-zone configurations has become a best practice for HA.
IT departments that run applications on Windows Server have long depended on Windows Server Failover Clustering (WSFC) to provide high availability. But WSFC requires a storage area network (SAN) or some other form of shared storage, which is not available in the public cloud. Microsoft addressed this issue in Windows Server 2016 Datacenter Edition and SQL Server 2016 with the introduction of Storage Spaces Direct. But S2D has its own limitations; most notably an inability to span multiple availability zones, making it unsuitable for HA needs.
The lack of shared storage in the cloud has led to the advent of purpose-built failover clustering solutions capable of operating in private, public and hybrid cloud environments. These application-agnostic solutions facilitate real-time data replication and continuous monitoring capable of detecting failures at the application or database level, thereby filling the gap left by the dial-tone nature of the CSPs' SLAs. Versions available for Windows Server normally integrate seamlessly with WSFC, while versions for Linux provide their own SANless failover clustering capability. Both normally make it possible to configure different failover/failback policies for different applications.
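The application-level monitoring these clustering products provide can be caricatured as a health-check loop. This sketch is illustrative only (node names and the trivial "first healthy node" policy are invented; real products add quorum, fencing and data-replication checks), but it captures why probing the application matters when the SLA only guarantees the VM:

```python
def monitor(nodes, is_healthy, active):
    """One pass of an application-level failover check.

    nodes: ordered list of node names; is_healthy: a probe of the
    *application* (e.g. a test SQL query), not just VM connectivity.
    """
    if is_healthy(active):
        return active                      # nothing to do
    for candidate in nodes:                # failover policy: first healthy node
        if candidate != active and is_healthy(candidate):
            return candidate               # promote the standby
    raise RuntimeError("no healthy node available")

# The VM can be "up" (dial tone) while the database inside it is not:
health = {"node-a": False, "node-b": True}
new_active = monitor(["node-a", "node-b"], health.get, "node-a")
```

Running such a check continuously, with the standby's data kept current by replication, is what closes the gap between "the VM has connectivity" and "the application is actually serving requests".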
More information about SANless failover clustering is available in Ensure High Availability for SQL Server on Amazon Web Services. While this article is specific to AWS, the cluster's basic operation is the same in the Google and Azure clouds.
It is worth noting that hypervisors also provide their own high availability features to facilitate a reasonably quick recovery from failures at the host level. But they do nothing to protect against failures of the VM, its operating system or the application running in it. Just like the cloud itself, these features only assure dial tone to a VM.
For DR, all CSPs have ways to span multiple regions to afford protection against widespread disasters that could affect multiple zones. Some of these offerings fall into the category of DIY (Do-It-Yourself) DR guided by templates, cookbooks and other tools. DIY DR might be able to leverage the backups and snapshots routinely made for all applications. But neither backups nor snapshots provide the continuous, real-time data replication needed for HA. For databases, mirroring or log shipping both provide more up-to-date versions of the database or transaction logs, respectively, but these still lag the data in the active instance owing to the best practice of having the standby DR instance located across the WAN in another region.
Microsoft and Amazon now have managed DR as a Service (DRaaS) offerings: Azure Site Recovery and CloudEndure Disaster Recovery, respectively. These services support hybrid cloud configurations and are reasonably priced. But they are unable to replicate HA clusters and normally have bandwidth limitations that may preclude their use for some applications.
One common use case for a hybrid cloud is to have the public cloud provide DR protection for applications running in the private cloud. This form of DR protection is ideal for enterprises that have only a single datacenter and it can be used for all applications, whether they have HA protection or not. In the enterprise datacenter, it is possible to have a SAN or other form of shared storage, enabling the use of traditional failover clustering for HA protection. But given the high cost of a SAN, many organizations are now choosing to use a SANless failover clustering solution.
The diagram below shows one possible way to configure a hybrid cloud for HA/DR protection. The use of SANless failover clustering for both HA and DR has the additional benefit of providing a single solution to simplify management. Note the use of separate racks in the enterprise data center to provide additional resiliency, along with the use of a remote region in the public cloud to afford better protection against widespread disasters.
This hybrid HA/DR configuration is ideal for enterprises with only a single datacenter.
This configuration can also be flipped with the HA cluster in the cloud and the DR instance in the enterprise datacenter. While it would also be possible and even preferable to use the cloud for both HA and DR protection, this hybrid configuration does at least provide some level of comfort to risk-averse executives reluctant to commit 100% to the cloud. Note how using SANless failover clustering software makes it easy to lift and shift HA configurations when migrating from the private to a public cloud.
With multiple availability zones and regions spanning the globe, all three major CSPs have infrastructure that is eminently capable of providing carrier-class HA/DR protection for enterprise applications. And with a SANless failover clustering solution, such carrier-class high availability need not mean paying a carrier-like high cost. Because SANless failover clustering software makes effective and efficient use of the cloud's compute, storage and networking resources, while also being easy to implement and operate, these solutions minimize ongoing costs, making robust HA and DR protection more affordable than ever before.
David Bermingham is Technical Evangelist at SIOS Technology. He is recognized within the technology community as a high-availability expert and has been honored to be elected a Microsoft MVP for the past eight years: six years as a Cluster MVP and two years as a Cloud and Datacenter Management MVP. David holds numerous technical certifications and has more than thirty years of IT experience, including in finance, healthcare and education.
View original post here:
Avoiding DR and High Availability Pitfalls in the Hybrid Cloud - Computer Business Review
From server room to boardroom the demands of today’s CIO – The Union Journal
The significance of IT in business today is such that those who hold its reins are among the most influential leaders in any company.

A company's security, customer experience, product development, competitive differentiation and business intelligence now fall, or at the very least overlap, into the remit of the CIO or their equivalent. No pressure, then.

The uphill task of the CIO in 2020 can be a thankless one: CIOs are usually expected to deliver day-to-day results while also planning closely for the company's future.

At the same time, they may have to satisfy both internal stakeholders (typically by saving costs and boosting efficiency through impressive technical projects) and end users (by delivering a seamless customer experience and keeping customer complaints to a minimum).

This delicate balancing act can be hugely taxing on any individual, let alone a CIO who is constantly under the cosh to innovate ahead of the nearest rival. In fact, the demands can be so intense that the CIO's tenure is one of the shortest in the C-suite, averaging just 4.3 years.
These significant demands and expectations, however, undersell, or more accurately underestimate, the transformative influence that the CIO wields in today's business.

No longer is the CIO essentially a glorified IT manager, tasked with managing and maintaining the company's IT infrastructure, data flows and servers.

Instead, as IT has become central to business, CIOs have (some of them, perhaps, reluctantly) transitioned to understanding, and then uniting, business and technology objectives to drive the development and growth of their organization in today's digital economy.

In a role that has evolved as quickly as IT itself has over the last decade, the CIO must now move between the development team and the C-suite, be agile and quick to respond to a range of unique challenges, and negotiate investment in transformational technology.

Besides bridging the technology divide on behalf of the organization internally, CIOs must also stay attuned to end users and the optimization of their experience.

CIOs typically head the digital migration of customer interactions with the company across numerous platforms, such as the web, apps and social media, making these interactions faster, more engaging and more visible to the sales and marketing funnels, so the company stays aware of customer expectations.
A key tool in the CIO's arsenal is the leveraging of third-party productivity tools and services, such as moving physical workloads and management procedures online into cloud computing environments.

As well as negotiating investment in the right tools and technologies to keep business operations ticking smoothly into a digital future, CIOs must also be prepared to fight for talent budgets, so that the right specialists can be onboarded, the CIO can offload certain tasks, and the return on those technology investments can truly be realized.

That might include cloud specialists to lead cloud migrations, data scientists to oversee AI and analytics initiatives, and even a designated security officer to ensure the organization is well protected day to day.

In such a demanding role that evolves so quickly, leadership skills are now as valuable as technical expertise, and perhaps tactical delegation can help CIOs squeeze an extra few months out of that tenure.
Here is the original post:
From server room to boardroom the demands of today's CIO - The Union Journal
U.S. Census Goes Digital With The iPhone 8 – The Mac Observer
It's census year in the U.S., but this time around it's going to be different. Each enumerator tasked with gathering the data is to be handed an iPhone 8 instead of a pen and paper. CNet looked into how it is all going to work, and the risks involved.
In an effort to make the door-to-door process, which is the most laborious and expensive part of the census, faster and more efficient, the bureau is arming 500,000 enumerators with the Apple iPhone 8. But as the census goes mobile, instantaneously beaming respondents' answers to data centers and cloud servers, it opens itself up to those who may want to access or manipulate such valuable information. The stakes to pull off a census have always been high, but with this year's adoption of new technological methods, the pressure to succeed is even higher.
Check It Out: U.S. Census Goes Digital With The iPhone 8
More here:
U.S. Census Goes Digital With The iPhone 8 - The Mac Observer