
Southern Door teacher Meachem finalist for national award – Green Bay Press Gazette

From Staff Reports| USA TODAY NETWORK-Wisconsin

BRUSSELS - Southern Door Elementary School teacher Jessica Meacham is a finalist for the 2022 Presidential Awards for Excellence in Mathematics and Science Teaching.

The award is considered the highest honor given by the U.S. government for science, technology, engineering, mathematics, and/or computer science teachers.

Meacham, a STEAM (science, technology, engineering, arts and mathematics) teacher for students in 4K through fifth grade, has taught at Southern Door for the past 18 years as a primary teacher. She was named Wisconsin's Rural Teacher of the Year in 2013 and National Rural Teacher of the Year in 2014.

"Mrs. Meacham makes the ordinary extraordinary," Cory Vandertie, the elementary principal, said last year. "Her incredible passion for teaching and learning is contagious, and she inspires her colleagues and students to never settle for less than their best."

Meacham is one of four Wisconsin finalists. The others are:


One awardee in mathematics and one awardee in science may receive a $10,000 award from the National Science Foundation and professional development opportunities, along with being honored at an award ceremony in Washington, D.C.

Go here to see the original:

Southern Door teacher Meachem finalist for national award - Green Bay Press Gazette

Read More..

She’s all the buzz: Local high schooler to compete at international science fair – Wyoming Tribune


Read the original:

She's all the buzz: Local high schooler to compete at international science fair - Wyoming Tribune

Read More..

What Stanford's recent AI conference reveals about the state of AI accountability – VentureBeat


As AI adoption continues to ramp up exponentially, so does the discussion around, and concern for, accountable AI.

While tech leaders and field researchers understand the importance of developing AI that is ethical, safe and inclusive, they still grapple with issues around regulatory frameworks and concepts of "ethics washing" or "ethics shirking" that diminish accountability.

Perhaps most importantly, the concept is not yet clearly defined. While many sets of suggested guidelines and tools exist, from the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework to the European Commission's Expert Group on AI, for example, they are not cohesive and are very often vague and overly complex.

As noted by Liz O'Sullivan, CEO of responsible AI company Parity: "We are going to be the ones to teach our concepts of morality. We can't just rely on this emerging from nowhere, because it simply won't."

O'Sullivan was one of several panelists to speak on the topic of accountable AI at the Stanford University Human-Centered Artificial Intelligence (HAI) 2022 Spring Conference this week. HAI was founded in 2019 to advance AI research, education, policy and practice to improve the human condition, and this year's conference focused on key advances in AI.

Topics included accountable AI, foundation models and the physical/simulated world, with panels moderated by Fei-Fei Li and Christopher Manning. Li is the inaugural Sequoia Professor in Stanford's computer science department and codirector of HAI. Manning is the inaugural Thomas M. Siebel Professor in machine learning and is also a professor of linguistics and computer science at Stanford, as well as the associate director of HAI.

Specifically, regarding accountable AI, panelists discussed advances and challenges related to algorithmic recourse, building a responsible data economy, the wording and conception of privacy and regulatory frameworks, as well as tackling overarching issues of bias.

Predictive models are increasingly being used in high-stakes decision-making, such as loan approvals.

"But like humans, models can be biased," said Himabindu Lakkaraju, assistant professor at Harvard Business School and an affiliate of the computer science department at Harvard University.

"As a means to de-bias modeling, there has been growing interest in post hoc techniques that provide recourse to individuals who have been denied loans. However, these techniques generate recourses under the assumption that the underlying predictive model does not change. In practice, models are often regularly updated for a variety of reasons, such as dataset shifts, thereby rendering previously prescribed recourses ineffective," she said.

In addressing this, she and fellow researchers have looked at instances in which recourse is not valid or useful, or does not result in a positive outcome for the affected party, as well as general algorithmic issues.

They proposed a framework, Robust Algorithmic Recourse (ROAR), which uses adversarial machine learning (ML) for data augmentation to generate more robust models. They describe it as the first known solution to the problem. Their detailed theoretical analysis also underscored the importance of constructing recourses that are robust to model shifts; otherwise, additional costs can be incurred, she explained.
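The problem Lakkaraju describes can be pictured with a small sketch: train a toy loan-approval model, prescribe a recourse to a denied applicant, then retrain after a simulated dataset shift and check whether the recourse still yields approval. This is an illustration of the issue ROAR targets, not the authors' implementation; the data, features and shift below are invented for demonstration.

```python
# Illustration only: a toy loan model, a prescribed recourse, and a check of
# whether that recourse survives a simulated dataset shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                  # hypothetical applicant features
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # hypothetical approve/deny labels

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, -0.5]])           # a denied applicant
recourse = applicant + np.array([[1.2, 0.0]])  # suggested change (e.g. raise income)

# Simulate a dataset shift and retrain, as lenders routinely do.
X_shifted = X + rng.normal(scale=0.5, size=X.shape)
updated_model = LogisticRegression().fit(X_shifted, y)

print("recourse valid under original model:", bool(model.predict(recourse)[0]))
print("recourse valid after model update:  ", bool(updated_model.predict(recourse)[0]))
```

If the second print shows the recourse no longer leads to approval, the applicant has done the prescribed work for nothing, which is exactly the failure mode robust recourse methods try to rule out.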

As part of their process, the researchers carried out a survey with customers who applied for bank loans over the previous year. The overwhelming majority of participants said algorithmic recourse would be extremely useful for them. However, 83% of respondents said they would never do business with a bank again if the bank provided recourse to them and it was not correct.

Therefore, Lakkaraju said, "If we provide a recourse to somebody, we better make sure that it is really correct and we are going to hold on that promise."

Another panelist, Dawn Song, addressed overarching concerns of the data economy and establishing responsible AI and machine learning (ML).

"AI deep learning has been making huge progress," said the professor in the department of electrical engineering and computer science at the University of California at Berkeley, but along with that, she emphasized, it is essential to ensure the evolution of the responsible AI concept.

Data is the key driver of AI and ML, but much of this exponentially growing data is sensitive and handling sensitive data has posed numerous challenges.

"Individuals have lost control of how their data is being used," Song said. "User data is sold without their awareness or consent, or it is acquired during large-scale data breaches. As a result, companies leave valuable data sitting in data silos and don't use it due to privacy concerns."

"There are many challenges in developing a responsible data economy," she added. "There is a natural tension between utility and privacy."

"To establish and enforce data rights and develop a framework for a responsible data economy, we cannot copy concepts and frameworks used in the analog world," Song said. Traditional methods rely on randomizing and anonymizing data, which is insufficient in protecting data privacy.

New technical solutions can provide data protection in use, she explained. Some examples include secure computing technologies and cryptography, as well as the training of differentially private language models.
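As one hedged illustration of the kind of technical protection Song describes, the sketch below applies the Laplace mechanism of differential privacy to a simple aggregate query. The dataset, value range and epsilon are assumptions for demonstration; this is not the speaker's tooling.

```python
# Minimal differential-privacy sketch: answer an aggregate query with noise
# calibrated to the query's sensitivity, so no single record can be inferred.
import numpy as np

def private_mean(values, epsilon=1.0, value_range=(0.0, 1.0)):
    """Return the mean of `values` with Laplace noise scaled to its sensitivity."""
    lo, hi = value_range
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(clipped)   # how much one record can move the mean
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return clipped.mean() + noise

salaries = np.random.uniform(0, 1, size=1000)  # hypothetical normalized records
print(private_mean(salaries, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy, which is the utility-versus-privacy tension Song mentions.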

Song's work in this area has involved developing program rewriting techniques and decision records that ensure compliance with privacy regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
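The decision-record idea can be pictured as an audit trail that logs why personal data was touched. The sketch below is a hypothetical illustration of that pattern only; the field names and lawful-basis labels are assumptions, not taken from Song's systems or from the regulations themselves.

```python
# Hypothetical sketch of a "decision record": every access to personal data is
# logged with its purpose, so compliance questions can be answered later.
import json
from datetime import datetime, timezone

AUDIT_LOG = []

def record_decision(user_id, field, purpose, lawful_basis):
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "field": field,
        "purpose": purpose,            # e.g. "password reset"
        "lawful_basis": lawful_basis,  # e.g. "contract", "consent"
    })

record_decision("user-42", "email", "password reset", "contract")
print(json.dumps(AUDIT_LOG, indent=2))
```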

"As we move forward in the digital age, these issues will only become more and more severe," Song said, "to the extent that they will hinder societal progress and undermine human value and fundamental rights. Hence, there's an urgent need for developing a framework for a responsible data economy."

It's true that large enterprises and corporations are taking steps in that direction, O'Sullivan emphasized. As a whole, they are being proactive about addressing ethical quandaries and dilemmas and tackling questions of making AI responsible and fair.

However, the most common misconception among large corporations is that they've developed procedures on how to de-bias, according to O'Sullivan, the self-described serial entrepreneur and expert in fair algorithms, surveillance and AI.

"In reality, many companies try to ethics wash with [a] simple solution that may not actually go all that far," O'Sullivan said. "Oftentimes, redacting training data for toxicity is referred to as negatively impacting freedom of speech."

She also posed the question: How can we sufficiently manage risks on models that have impossibly large complexity?

"With computer vision models and large language models, the notion of de-biasing something is really an infinite task," she said, also noting the difficulties in defining bias in language, which is inherently biased.

"I don't think we have consensus on this at all," she said.

Still, she ended on a positive note, observing that the field of accountable AI is popular and growing every day and that organizations and researchers are making progress when it comes to definitions, tools and frameworks.

"In many cases, the right people are at the helm," O'Sullivan said. "It will be very exciting to see how things progress over the next couple of years."


Read the original:

What Stanford's recent AI conference reveals about the state of AI accountability - VentureBeat

Read More..

Linguistics Faculty Receive NSF Grant in Computational Phonology – UMass News and Media Relations

Joe Pater

Joe Pater, professor and chair of the linguistics department, and Gaja Jarosz, associate professor of linguistics, have been awarded a National Science Foundation (NSF) research grant, "Representing and learning stress: Grammatical constraints and neural networks," in the amount of $386,226. This three-year research grant will study the learnability of a wide range of word stress patterns, using two general approaches. The goal of the project will be to develop grammar and learning systems that can cope with a broader range of typological data than current models, and that can handle more details of individual languages.

According to the project summary, "learning stress involves learning hidden structure, parts of the representation [of language] that are not present in the observed data and that must be inferred by the learner." The research will draw on the theories and methods of both linguistics and computer science to study the learning of word stress, the pattern of relative prominence of the syllables in a word, by applying learning methods from computer science to find new evidence to distinguish competing linguistic theories. It will also examine systems of language representation that have been developed in computer science but have received relatively little attention from linguists (neural networks).

Gaja Jarosz

The research will engage both undergraduate and graduate linguistics students at UMass Amherst. In addition, the project summary notes that "linguistics has a much higher proportion of female students than computer science, and this project aims to address gender imbalance in STEM."

This is the fourth NSF grant on which Pater has served as Principal Investigator (PI). At the conclusion of this grant, his research will have received nearly two decades of continuous funding, totaling $1,428,866.

Read this article:

Linguistics Faculty Receive NSF Grant in Computational Phonology - UMass News and Media Relations

Read More..

In Memoriam: Life Trustee Anthony YC Yeh G’49, an Innovator at the Intersection of Engineering and Business – Syracuse University News

Anthony Y.C. Yeh

After earning a master's degree in mechanical engineering from the College of Engineering and Computer Science, Anthony Y.C. Yeh G'49 returned to his native country determined to help the throngs of refugees fleeing into Hong Kong from mainland China. He had a brilliant mind, an innovative spirit and a keen understanding of the intersection of engineering and business that helped him build an international company that supplied custom-made carpets to Queen Elizabeth II, President John F. Kennedy, the king of Thailand and other notables.

While building a global business, Yeh never lost his fondness for Syracuse University. He had been active on campus as a graduate student, serving as president of the Chinese Students Club and a junior member of the American Society of Mechanical Engineers. He was first elected to the Board of Trustees in 1994, serving six years as a voting trustee. He was recognized for his business acumen with the Arents Award in 1992, the highest honor recognizing alumni for their professional achievements. Yeh passed away Feb. 24, 2022, just two months shy of his 99th birthday.

"Tony was instrumental in building relationships between Syracuse and China, ensuring that Chinese and American students understood the importance and nuances of the global economy," says Board Chair Kathleen A. Walters '73. "He forged a partnership between the Maxwell School of Citizenship and Public Affairs and the China National School of Administration (CNSA) that has continued to thrive for nearly three decades."

Yeh's connection with Syracuse University began after he received a bachelor of science degree from the Henry Lester Institute of Technical Education in Shanghai, China, in 1945. While working on a master's degree from Syracuse, he served as a technician and engineer at the U.S. Army Air Forces Kiangwen Air Base in Shanghai and on board Chinese Maritime Customs patrol ships. His career in the cotton industry began after he received the master's degree. Together with colleagues from a Hong Kong-based cotton producer, he launched a carpet company in 1956 in Hong Kong with the express purpose of supplying jobs for Chinese refugees.

Yeh served as managing director of Tai Ping Carpets and used his engineering prowess to create and patent an electric hand-held tool that tufted the elaborate carpets 100 times faster than hand tying. According to the company, Yeh developed the techniques in hand tufting carpets, leading to the successful commercialization of the product. He helped form alliances with other Asian partners and established international sales subsidiaries, leading to the formation of the world-recognized Tai Ping custom carpet group of today. Tai Ping has showrooms in New York, Los Angeles, London, Paris, Milan, Hong Kong and Shanghai.

Yeh was also a partner in Suntec City in Singapore, a major mixed-use development in Marina Centre that established Singapore as an international convention and exhibition center.

Together with his wife, Sylvia, Yeh established the Anthony Y.C. Yeh Endowed Beijing Scholarship and the Anthony Y.C. Yeh Endowed Undergraduate Scholarship. They have supported other initiatives in the College of Arts and Sciences, College of Engineering and Computer Science, the Maxwell School of Citizenship and Public Affairs, the College of Visual and Performing Arts, the Goldstein and Alumni Faculty Center, the Hildegarde and J. Myer Schine Student Center, Syracuse Abroad and alumni relations.

Notably, as part of Yehs efforts to ensure a global experience for Chinese and American students, he served as a member of the Council of the Chinese University of Hong Kong (CUHK). CUHK is a comprehensive research university with offerings in English, Cantonese and Putonghua (Mandarin). It is a world partner with Syracuse and requires an application from Syracuse Abroad and CUHK. The Chinese University of Hong Kong, Shenzhen, program is available to Martin J. Whitman School of Management undergraduates.

Yeh is survived by his wife, Sylvia; two sons, Kent and Russell Yeh; two daughters, Lucienne Cheng and Monique Poon '74 (David B. Falk College of Sport and Human Dynamics); and eleven grandchildren and six great-grandchildren.

About Syracuse University

Syracuse University is a private research university that advances knowledge across disciplines to drive breakthrough discoveries and breakout leadership. Our collection of 13 schools and colleges with over 200 customizable majors closes the gap between education and action, so students can take on the world. In and beyond the classroom, we connect people, perspectives and practices to solve interconnected challenges with interdisciplinary approaches. Together, we're a powerful community that moves ideas, individuals and impact beyond what's possible.

Follow this link:

In Memoriam: Life Trustee Anthony YC Yeh G'49, an Innovator at the Intersection of Engineering and Business - Syracuse University News

Read More..

Israel Prize winner to donate award money to groups fighting to end the occupation – Middle East Monitor

Israel Prize winner in mathematics and computer science Professor Oded Goldreich will be donating his prize money to five left-wing human rights organisations.

A professor of computer science at the Faculty of Mathematics and Computer Science of Weizmann Institute of Science, Goldreich will divide the $23,300 among five organisations he said are working toward the goals of fighting to end the Israeli occupation, genuine equality for all of the country's inhabitants and for social justice.

The groups are Breaking the Silence, Standing Together, Kav LaOved, B'Tselem and the Legal Centre for Arab Minority Rights in Israel (Adalah).

Goldreich was awarded the prize on Monday for his work on computational complexity theory.

Addressing Israel's occupation of Palestine, he said:

"I would like to say something a bit political. The story is not complete without mentioning the price paid by another nation for our revival, and our moral commitment to try as best as we can to compensate and not oppress the other nation."

He added: "We are, of course, doing the exact opposite. Personally, it darkens my life."

A petition against handing the prize to the professor, set up by the Israeli education minister, Yifat Shasha-Biton, was overruled last month by the High Court of Justice.

In response, she said: "The fact that professor Goldreich decided to donate the Israel Prize to organisations that work against Israel Defence Forces soldiers who risk their lives for us after having called to boycott an Israeli academic institution proves that he is a provocateur who cannot engage in academia with clean hands, and does not deserve a national prize."

She added that "the High Court's decision to award him the prize in opposition to my opinion was wrong."

READ: The billion dollar deal that made Google and Amazon partners in the Israeli occupation of Palestine

Meanwhile, the rights groups receiving the prize money praised Goldreich's decision to support the rights of Palestinians.

In a joint statement, they said: "This is also our opportunity to express our appreciation for that fact that throughout his remarkable academic career, professor Goldreich has used his name, status and achievements to promote battles for equality, justice, democracy and human rights."

The statement added that it is due to his "determination to promote these values which certain politicians are determined to mock," that he was initially denied the prize.

"We are proud to pledge that the former and present education ministers will be agitated by our actions for the benefit of Israelis and Palestinians with the generosity of professor Goldreich."

See more here:

Israel Prize winner to donate award money to groups fighting to end the occupation - Middle East Monitor

Read More..

What to do when the servers go down – Raconteur

Nationwide customers faced repeated disruption to their banking services around Christmas, unable to access their funds or pay their bills. While the building society described the outage as a technical issue, some experts identified a server failure.

The disruption demonstrated just how much damage a tech crash can cause and the salient importance of servers for smooth business operations, with millions of users unable to receive or make payments.

Whatever the causes of a server crash, ranging from simple hardware failure and power outages to software glitches, cyberattacks and natural disasters, the consequences can be catastrophic. Businesses, big and small, rely on connectivity; in a digital age, it's the lifeblood of commerce. Consequently, organisations have become increasingly reliant on servers.

Just one hour of downtime can cost anything from thousands to hundreds of thousands of pounds

As screens go blank, digital and human connections are cut. The afflicted organisation loses productivity, orders and profits, while customers are affected, causing reputational damage and possible loss of future business. In addition, if private data is lost, regulatory fines and penalties can result, as well as class-action lawsuits.

Servers support essential connections, which facilitate business operations including interaction with staff and customers. Their importance means more and more companies use a network of cloud servers.

These servers now play a vital role in business technology. They provide a central repository, receiving, storing, retrieving and sending data, ensuring all team members have timely access to the information they need.

Web, email and file servers, to name just a few, are essential for employees, teams and systems to perform the tasks that make up their jobs. The pandemic and resulting shift to remote and hybrid working have necessarily accelerated data-centric, cloud-based digitalisation, so businesses have become increasingly dependent on the uninterrupted operation of their servers.

But have servers, computers with advanced hardware running a server program, become an Achilles heel? Essential to business operations, what happens when servers crash? How can an organisation recover quickly and get back to work?

Azeem Javed is a consultant at Creative Networks, a managed IT and telecoms specialist. He says that encapsulating backup as part of a business continuity and disaster recovery (BCDR) strategy is critical for all businesses, ensuring continuity of their operations and suitable recovery. This should extend to all aspects of business.

Contingency planning and a system backup strategy, whether that means installing locally based or remote backup servers or backing up to an external hard drive, together with disaster recovery software, can certainly help the chief technology officer sleep more soundly at night. If a business has alternate backups for its files, it can quickly bounce back and resume operations.

A full backup is a complete copy of an organisation's data assets. This process requires all files to be backed up into a single version. However, the dataset should be copied in its entirety and stored in a separate location, away from the server.

Such an offsite backup, which can be accessed, restored or administered from a different location, guarantees high-level security and peace of mind as it allows data storage offsite and online.
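As a rough illustration of that advice, the sketch below archives a data directory and copies the archive to a separate host. The paths, the backup host and the use of rsync over SSH are assumptions for demonstration, not recommendations from the article.

```python
# Minimal sketch of a full backup shipped to a separate, offsite location.
# Paths and the remote destination are illustrative assumptions only.
import shutil
import subprocess
from datetime import datetime

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")

# Create a single compressed archive of the whole dataset (a "full backup").
archive = shutil.make_archive(f"/backups/full-{stamp}", "gztar", root_dir="/srv/data")

# Copy the archive away from the server it protects (here, rsync over SSH).
subprocess.run(["rsync", "-a", archive, "backup-host:/offsite/backups/"], check=True)
print("backup stored offsite:", archive)
```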

"If your data is mission-critical to your business, backup servers are absolutely vital to ensure seamless business continuity and to avoid data loss," says Jake Madders, a director at Hyve, a managed cloud hosting provider.

"We now live in an always-on world, where just one hour of downtime can cost anything from thousands to hundreds of thousands of pounds, depending on the size of a company. Time is money."

Irrespective of the location of the server, a BCDR plan is essential for the worst-case scenario.

"The pandemic has forced companies to realise that being prepared for even the most unlikely situation can no longer be treated as an optional part of business planning," says Madders. "While it might seem difficult to measure the return on investment of a disaster recovery solution, because it's a precautionary feature that ideally would never need to be used, it shouldn't be seen as a luxury add-on service solely for larger companies. It should be a fundamental part of every business's IT strategy."

A disaster recovery plan is a documented, structured approach that describes how an organisation can quickly resume work after an unplanned incident. It is an essential part of a business continuity plan and should be applied to the aspects of the operation that depend on a functioning IT infrastructure.

The step-by-step plan consists of precautions to minimise the effects of a disaster so the organisation can continue to operate or quickly resume mission-critical functions. Typically, disaster recovery planning involves an analysis of business processes and continuity needs.

Before generating a detailed plan, an organisation should perform a business impact analysis and risk analysis, and establish recovery objectives.

All strategies should align with the organisations goals. Once a business continuity strategy has been developed and approved, it can be translated into a disaster recovery plan, with an incident response team and list of important contacts.

The plan should be reviewed by management, tested, audited and regularly updated. It should be substantiated through testing, which identifies deficiencies and provides opportunities to fix problems before a crash occurs. Additionally, it's important for businesses to monitor and protect their servers with software that can flag up potential problems.
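Monitoring of that kind can be as simple as a periodic health check that raises an alert when a server stops responding or slows down. The sketch below is a minimal illustration; the URL, timeout and latency threshold are assumptions, not part of any particular monitoring product.

```python
# Minimal sketch of a health check that flags potential server problems
# before they become outages. The endpoint and thresholds are assumptions.
import time
import urllib.request

def check(url="https://intranet.example.com/health", timeout=5):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            latency = time.monotonic() - start
            return resp.status == 200 and latency < 2.0  # healthy and fast enough
    except OSError:
        return False  # unreachable, timed out, or refused

if not check():
    print("ALERT: server health check failed; trigger the BCDR runbook")
```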

And before calling in the tech experts, there are a few basic housekeeping tips that can lower the possibility of servers crashing in the first place. Prevention measures include keeping the server room isolated and cool with air conditioning. It should also be clean because dust can cause overheating.

In-house tech staff may be able to troubleshoot a server failure, but more complex issues could require outside help. This means that adequate training of tech staffers in how to deal with a failed server in the first instance is a good investment, as is maintaining a working relationship with an external IT specialist.

Of course, if the server is in a remote data centre, the organisation is at the mercy of the good practice of an outside agency and reliant on their speedy action to get systems back up and running, so choose your provider carefully.

Read the original:
What to do when the servers go down - Raconteur

Read More..

Google Distributed Cloud: Who Will It Benefit? – ITPro Today

Wouldn't it be great if you could take a public cloud platform like Google Cloud and deploy its services in your own data center, or even on edge devices?

Well, you can, using Google Distributed Cloud, one of the newest offerings in Google's cloud services portfolio.

Related: Will Hybrid Cloud Save You Money? Predict Your Hybrid Cloud Costs

Here's how Google Distributed Cloud works, which use cases it targets, and why you may (or may not) want to use it as part of an edge or hybrid cloud strategy.

What Is Google Distributed Cloud?

Related: The Pros and Cons of Kubernetes-Based Hybrid Cloud

Google Distributed Cloud is a suite of cloud products from Google designed primarily to support the deployment of public cloud services at the "edge."

Thus, Google Distributed Cloud isn't a specific platform or service as much as it's a set of various tools and services, which you can use in a variety of ways.

Most of the services built into Google Distributed Cloud come from Google's standard public cloud platform (such as Anthos), so there's nothing really brand-new about Google Distributed Cloud from a technical perspective. It's mostly the way that Google is packaging the services to enable edge and hybrid cloud use cases that makes Google Distributed Cloud unique.

Google announced its Distributed Cloud portfolio in October 2021. The offering currently remains in preview mode, and there have been few real-world deployments of the platform to date. A deployment by Bell Canada, announced in February 2022, is one of the first.

Google Distributed Cloud works by allowing users to extend public cloud services that are hosted on Google Cloud Platform to private servers, internet of things (IoT) devices, or other infrastructure. In other words, you can use Distributed Cloud to manage infrastructure that you own, as opposed to infrastructure owned by a cloud provider like Google, using many of the same tools and services that Google makes available to its public cloud customers.

In this way, Distributed Cloud is similar to offerings such as AWS Outposts and Azure Arc, which also extend public cloud functionality into private infrastructure.

Currently, Google Distributed Cloud is designed to run on four types of infrastructure setups:

This means Google Distributed Cloud can operate on basically any infrastructure, including conventional data centers and less orthodox environments, like networks of IoT devices.

If Google Distributed Cloud sounds like a hybrid cloud platform, it's because it basically is. Extending public cloud services like those of Google Cloud into private infrastructure would certainly qualify as a hybrid cloud deployment by most definitions.

Notably, however, Google is not calling the platform a hybrid cloud solution. Instead, Google is using language that centers on "edge" and "distributed" infrastructure.

That's probably because Google already markets Anthos (which, again, is integrated into the Distributed Cloud portfolio but which exists as a stand-alone product, too) as its main hybrid cloud solution.

Since Distributed Cloud is also based in part on Anthos, you could argue that the main difference between Distributed Cloud and a hybrid cloud platform is marketing and branding, not technology. And indeed, to a significant extent, Distributed Cloud seems to be a reflection of Google's efforts to position itself as a leader in the edge computing market above all else.

It's understandable why Google would choose to brand Distributed Cloud as something different from a hybrid cloud platform, even if it's technically really not that different. With Distributed Cloud, Google is in a stronger position to cater to use cases like running network functions on telco infrastructure or managing edge IoT devices deployments that aren't usually the focus of conventional hybrid cloud platforms.

Ultimately, Google Distributed Cloud is likely to become a product that is very important in certain narrow niches, but that most companies won't use.

In verticals such as telco, or for businesses with large IoT infrastructures, Google Distributed Cloud offers an easy way of managing large, distributed networks of devices. It's not the only solution of its kind; you could also manage distributed infrastructures using most Kubernetes distributions, for example, or via proprietary services like Azure IoT Edge. But the fact that Google Distributed Cloud is based on Google Cloud services will give it an advantage among customers who are already invested in the Google Cloud ecosystem.

That said, companies that just want to run a conventional hybrid cloud, meaning one that extends public cloud services to private servers or data centers without edge infrastructure in the mix, aren't likely to benefit from Google Distributed Cloud. They should choose a more traditional hybrid cloud platform, like Anthos or a similar offering from a different public cloud provider.

Read the original here:
Google Distributed Cloud: Who Will It Benefit? - ITPro Today

Read More..

Is Cloud Computing Worth the Cost? | eWEEK – eWeek

IT costs are complex and difficult to figure out. First consider traditional IT, better known as on-premises. On one hand you have the cost of the hardware, the software, and the data center space. We all know that's just the beginning. There are operating and administrative costs, including the cost of the many humans needed to keep your infrastructure running. Also factor in the cost of setting up disaster recovery systems, power and networking costs.

In contrast, cloud computing costs can be easier to figure out because they are mostly fully loaded. That is, cloud invoices include all the costs listed above and you pay only for usage fees. Cloud providers charge service fees based upon the minutes you use or the resources you leverage, such as storage, compute, and networking.
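A back-of-the-envelope sketch shows how such usage-based fees add up into a single fully loaded bill. The rates below are illustrative assumptions, not any provider's actual pricing.

```python
# Toy estimate of a monthly pay-as-you-go cloud bill. All rates are assumptions.
compute_hours = 24 * 30     # one VM running for a full month
compute_rate = 0.05         # assumed $ per VM-hour
storage_gb = 500
storage_rate = 0.02         # assumed $ per GB-month
egress_gb = 100
egress_rate = 0.09          # assumed $ per GB transferred out

monthly_bill = (compute_hours * compute_rate
                + storage_gb * storage_rate
                + egress_gb * egress_rate)
print(f"estimated monthly bill: ${monthly_bill:.2f}")
```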

When it's time to compare costs, cloud computing and traditional computing are like apples and oranges. As cloud computing became a more viable solution for many enterprises over the last 12-15 years, we improved our ability to provide accurate cost metrics to evaluate the true cost of each option and make better decisions about which type of computing to use.

Many now believe that traditional on-premises IT resources, such as storage and compute servers, power supplies, and networking equipment, come in a distant second to public cloud computing services. However, in certain cases, there are still valid reasons to use traditional IT resources, and many of those reasons relate to costs.

Keep in mind, you must be creative and thorough to consider the true value of each. While cloud computing might initially appear more expensive, the business values of agility, speed, and increased innovation may boost the true value of cloud over and above traditional on-premises solutions.

Also see: Top Cloud Companies

Cloud computing is not always cheaper than traditional on-premises systems. The "it depends" answer that consultants often give reveals the complex reality of cloud costs versus traditional hardware and software for specific enterprises with specific goals.

Further confounding the issue is that prices for traditional IT resources such as hard disk drives (HDDs) fell over the last 10 years, with prices for solid state drives (SSDs) close behind. Most experts expect SSD prices to be lower than HDDs in the very near future. At the same time, cloud storage prices did not drop by the same degree, and some providers' storage prices may even creep up.

But again, you can't do a straight comparison of cloud costs versus traditional IT resources for items such as storage and compute. As we mentioned above, there are many hidden costs that are part of traditional hardware and software ownership.

What's more, you must consider the soft values of cloud. Soft cost business values are much more difficult to calculate. Soft values include cloud's ability to make it easier for the business to innovate with quick access to higher-tech solutions such as AI, containers, advanced data analytics, and other emerging technologies that would be too expensive and too slow to deploy if they were provisioned using traditional hardware and software.

What takes five minutes to do in the cloud could take a month or more if you follow the normal enterprise procurement cycles. What's more, the traditional route could cost as much as 1,000 times the cloud capital costs needed to invest in physical servers and software. Remember, the initial value of cloud computing is its ability to save on capital expenses (CapEx) by moving that money to operational expenses (OpEx).

Also see: Why Cloud Means Cloud Native

With all that said, this is still an important question: Is cloud computing worth the cost?

Again, the truthful answer to this question is the good old "it depends." The multidimensional answer really depends upon your business, your industry, and how you expect to leverage technology now and into the future.

For example, let's say you're a small tire manufacturer in Ohio. You don't plan to expand the business anytime soon, only light innovation will occur in your business, and the business is very cost- and margin-sensitive. Cloud computing may not provide enough ROI benefits in this case. Factor in the cost of migrating to the cloud and the risks you'll have to endure if there is no real benefit from the soft values of innovation, agility, and speed to deployment, and this business probably won't see the value of cloud computing.

On the other hand, if you're a brand-new tire manufacturer with no sunk costs in a data center and a new vision for disrupting the tire market, cloud will almost always be the best answer.

Looking at the vertical market aspect, a high-tech company in Silicon Valley will typically put a much higher value on innovation to build new products and services. Speed to deployment to support market efforts is normally on the critical path of enterprises in that industry. Agility also plays a role, allowing a company to change direction as the market changes around it. For companies in this market space, the cost of cloud computing could be 100 times that of traditional on-premises computing, and it would still be worth it.

There is one more elephant in the room. As R&D spending by vendors shifted to cloud computing over the last 10 years, so did innovation. Today the best-of-breed technologies (including security, databases, analytics, and AI) typically exist only in the cloud.

Even enterprises that do not see the ROI value of a move to the cloud (such as our tire company example above) may find they have little choice as vendors begin to sundown their traditional on-premises systems and software. Many call this the "forced march to the cloud" as vendor R&D resources shift to build more in-demand products and services for the cloud. We've dealt with these natural forces in the market many times in IT over the years. Recall the shift from mainframes to PCs, client/server, SOA, and other trends that took over vendor R&D spending in the past, and thus forced enterprise IT to follow the spending.

The reality is that most cloud computing is worth the cost. Cloud capabilities such as agility and innovation bring many positive benefits to businesses that can exploit them for growth. Even when cloud values are not as obvious, the market forces will continue to push many to the cloud, no matter if they can define the value for themselves, or not.

Eventually, non-cloud on-premises systems will go the way of 8-tracks and VCRs, partly due to new advances in cloud capabilities, mostly due to the lack of sales and support options. Technology constantly changes and the old eventually makes way for the new. It's a good time to look up at the long-term path and plan for the transition.

The rest is here:
Is Cloud Computing Worth the Cost? | eWEEK - eWeek

Read More..

What Is Cloud Automation?: a Quick, All-Around Guide – TechDecisions

Automation is about making life easier; as simple as using a free meeting notes template.

For large-scale companies, automation reduces errors, saves time and performs repetitive tasks. This frees up their human employees to focus on more important jobs.

One area where it's highly effective is in the cloud.

Companies like Netflix and Amazon, and even far smaller businesses, use public clouds for their services. Public clouds allow them to innovate without being set back by legacy technology.

But, as organizations grow, the number of cloud-based tasks increases. It can take an army of humans days to complete them. Cloud automation will take only minutes.

Solutions like Google Cloud RPA or alternatives to it can empower enterprises. Yet, cloud automation is often lacking in enterprise environments. It can sound intimidating.

In this guide, we'll discuss this technology, answering two important questions.

What is cloud automation, and how can it help you reach the full potential of the cloud?

Cloud automation uses automated tools to carry out workflows in a cloud environment. These workflows would otherwise occur manually.

It's like on-premises automation, and many of the same tools are used for both. But there are specialized tools for the cloud.

Unlike on-premises automation, cloud automation focuses on automating services and virtual infrastructure. It's also suited to handle the scalability and complexity of the cloud.

Automation isn't built into the cloud, so it can be costly and hard to set up. It requires expertise to carry out, but it's a crucial element for any cloud strategy.

Good cloud infrastructure encourages automation. This enables you to get the most value from your cloud services.

Cloud automation means cloud resources are used efficiently. It reduces manual workloads, minimizes errors, and improves security.

Automation provides big opportunities for scalability, agility, and efficiency. In simple terms, it's able to perform complex tasks with the click of a button.

This speeds up how well your organization can adapt, which is helpful in today's business climate. With cloud automation, you can respond to challenges and innovate faster.

Thus, it's essential to understand what you can automate and which software you'll need, from provisioning tools to reliable data backup software, to help achieve your aims.

From this, you can build an effective cloud strategy and accelerate digital transformation.

We've answered two questions: what cloud automation is and why you should use it. Now let's look at a few examples of how it can be applied.

Reducing manual workloads is a crucial part of any automation. Infrastructure provisioning is a typical use case when it comes to cloud automation.

Imagine you want to set up a collection of virtual servers. Configuring them one-by-one would take a team a long time.

Cloud automation tools can perform this task by automating template creation. The templates define each virtual server configuration. The tool can then apply them.

Tools that perform this type of infrastructure provisioning are known as infrastructure-as-code (IaC) tools. They can be used to configure other types of cloud resources as well.

This type of automation allows organizations to scale their cloud infrastructure quickly. It gives them the added advantage of agility, and the ability to innovate more rapidly.
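A minimal sketch of that template-driven approach appears below. FakeCloud stands in for a real provider SDK, so the class, method name and template fields are assumptions; the point is simply that one template drives every server instead of hand configuration.

```python
# Illustration of template-driven provisioning (infrastructure as code).
# `FakeCloud` is a hypothetical stand-in for a real cloud SDK.
SERVER_TEMPLATE = {
    "machine_type": "small",
    "image": "ubuntu-22.04",
    "region": "us-east1",
}

class FakeCloud:
    def create_server(self, name, **config):
        print(f"creating {name} with {config}")
        return {"name": name, **config}

def provision_fleet(client, count, template=SERVER_TEMPLATE):
    # One template, many identical servers: no one-by-one manual configuration.
    return [client.create_server(f"web-{i:02d}", **template) for i in range(count)]

servers = provision_fleet(FakeCloud(), count=3)
```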

Some organizations could have hundreds of staff members, each requiring different privileges.

Setting up each policy manually will be a drawn-out process, long and potentially error-ridden. As employees come and go, managing access rights to cloud resources will be difficult.

Like in the example above, cloud automation can create templates. This time its for Identity and Access Management (IAM). These templates set up the user roles within your cloud environment.

This can be integrated into a central enterprise directory service. With it, identities across both cloud and non-cloud resources can be managed. This could be helpful for organizations that use MDM software solutions.

Using automation here goes beyond organizational agility. Onboarding new team members and modifying roles become easier and more efficient.

Not only does this save time, but it ensures a greater level of security.
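The same template idea can be sketched for access rights. In the hypothetical example below, role templates define permissions once, so onboarding a new team member is a single call; the roles, permissions and directory structure are assumptions, not any particular IAM product's schema.

```python
# Hypothetical IAM role templates: permissions are defined once per role,
# then applied to each new user instead of being configured by hand.
ROLE_TEMPLATES = {
    "developer": {"compute": ["read", "deploy"], "storage": ["read"]},
    "analyst":   {"storage": ["read"], "analytics": ["query"]},
    "admin":     {"compute": ["*"], "storage": ["*"], "iam": ["*"]},
}

def onboard(user, role, directory):
    """Grant a new team member the permissions defined by a single template."""
    directory[user] = ROLE_TEMPLATES[role]

directory = {}
onboard("new.hire@example.com", "developer", directory)
print(directory)
```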

It's not unusual for companies to use many private and public clouds at once.

In this situation, cloud automation is crucial. It allows teams to deploy workloads to many clouds at once. They can then manage them from a single interface.

Organizations with a multi-cloud strategy can increase efficiency with centralized, automated tools.

These are a few common cloud automation examples. Several other typical tasks can be automated in the cloud:

Cloud automation can be tricky to set up, so why would you go through with it at all?

Repetitive tasks are tedious tasks. By automating low-level manual processes, your staff saves time. With less pressure, they're able to focus on more exciting projects and tasks.

The more people who can access a sensitive task, the more likely an accidental security leak will be. Ransomware attacks have also become a cause for concern. Automation reduces security vulnerabilities like these, by limiting non-essential access.

With cloud automation, tasks are carried out faster, with the same high quality. What once took several days might now take a few minutes.

Human error is as sure as death. Using enterprise robotic process automation reduces errors for non-cloud processes. The same applies in the cloud.

As long as automation rules are correctly configured, errors will be a rarity. Constant oversight is also no longer required.

You can manage a small environment without automation.

But, if you want to grow and scale your business, cloud automation is a necessity. After all, more users mean more tasks.

Cloud automation and orchestration are often used as if they mean the same thing, but there is a difference between the two.

Cloud automation refers to the automation of a single task. Cloud orchestration refers to the automation of a host of tasks. It involves the automation of workflows across separate services and clouds.

In simpler terms, different cloud automation tasks can be coordinated and automated with orchestration.

For example, imagine you want to install an operating system on a server. The cloud automation steps you might use could be:

With cloud automation, these are four distinct tasks. Each task has to be done individually and in the right order.

With orchestration, these tasks would be combined into one workflow. This permits the entire server setup to be automated in the correct order. It's like pressing one button instead of four, as the sketch below illustrates.
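In the sketch, each function stands for one automation task (the bodies are placeholders, not real provisioning steps), and the orchestration layer simply chains them into a single workflow that runs in order.

```python
# Minimal orchestration sketch: individual automation tasks combined into one
# workflow. The task bodies are placeholders for illustration only.
def allocate_server():
    print("1. server allocated")

def install_operating_system():
    print("2. operating system installed")

def configure_networking():
    print("3. networking configured")

def register_monitoring():
    print("4. monitoring registered")

WORKFLOW = [allocate_server, install_operating_system,
            configure_networking, register_monitoring]

def orchestrate(steps):
    # "One button instead of four": each step runs only after the previous one.
    for step in steps:
        step()

orchestrate(WORKFLOW)
```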

Cloud orchestration is essential for an enterprise setting. Here there are often too many cloud automation processes to manage on an individual basis.

Using a mix of both cloud automation and orchestration is vital. Used in conjunction, they increase productivity and make workflows more efficient.

The combination is particularly useful for multi-cloud solutions, where you may need to coordinate tasks across different services, teams, and environments. This also reduces costly errors.

Cloud automation is in many ways a sub-category of cloud orchestration. You can have cloud automation without cloud orchestration, but cloud orchestration needs automation.

The tools available for cloud native automation could fill a book. But they can be split into two distinct categories: tools built into the cloud platforms themselves, and third-party tools from independent vendors.

These automation tools are built into their respective platforms. As a result, they offer the highest level of integration. New cloud functionalities are thus immediately available.

Some examples are AWS CloudFormation and Azure Resource Manager.

As with anything though, there is a downside.

These tools generally only support the clouds that they are a part of. You can't apply them to any other cloud, and you're very much locked into your platform.

Independent vendors normally create third-party tools that are usable on any platform.

In general, these tools will work with any public, private or hybrid cloud platform. They are often open-source, though there are commercial options available.

They can have extra features and versatility that built-in platforms lack.

Some examples include Puppet, Ansible, Chef, Salt, and Hashicorp Terraform.

Unfortunately, these automation tools lag in implementing functionality. As they aren't as tightly integrated, they're often playing catch-up.

So, when a cloud provider introduces a new feature, it could be a while before you can use it.

As with any decision in the business world, the one you make will be based on your needs. Sticking to mature, established platforms will bring you greater stability than newer technology.

The last few years have shown that things can change very quickly for businesses. How well you can adapt can be the defining trait that determines your survival.

A cloud-based business process automation solution can streamline, optimize and scale your business. It's the only way to reach the full potential of your cloud environment.

By automating cloud management tasks, businesses are more agile. This gives them the ability to innovate quicker when faced with challenges. Its essential for any large-scale cloud environment.

Furthermore, employees no longer have to spend time and resources on repetitive tasks. They can focus on developing exciting new ideas and tasks that aren't automated.

Your long-term cloud management success depends on choosing an appropriate automation tool.

Define your budget and goals to have a clear picture of what you want to achieve.

Now that you have an idea of what you can automate, consider what tools you need for your use cases to orchestrate your clouds. The rest is child's play.

Grace Lau is the Director of Growth Content at Dialpad, an AI-powered cloud communication platform for better and easier team collaboration through a better local caller ID service. She has over 10 years of experience in content writing and strategy. Currently, she is responsible for leading branded and editorial content strategies, partnering with SEO and Ops teams to build and nurture content.

More:
What Is Cloud Automation?: a Quick, All-Around Guide - TechDecisions

Read More..