
Baden-Württemberg invests in a joint project to develop adaptive artificial intelligence chips – TheMayor.EU

Baden-Württemberg funds a project on adaptive Artificial Intelligence chips

2 million euros to expand the range of competencies for technology transfer

The German state of Baden-Württemberg is funding a joint project on adaptive Artificial Intelligence (AI) chips with around 2 million euros. This will further expand the range of competencies for technology transfer in this key discipline of artificial intelligence.

In this way, the Baden-Württemberg Ministry of Economics, Labour and Tourism supports the joint project on adaptive AI chips "Microelectronics for AI - data-oriented implementation in industrial use (DoRiE)" organized jointly by the Institute for Microelectronics Stuttgart (IMS CHIPS), the Research Center for Computer Science Karlsruhe (FZI) and the Hahn-Schickard Society for Applied Research e.V.

The three business-related institutes of the Baden-Württemberg Innovation Alliance are jointly implementing AI systems for decentralised use on sensors, robots or machines in industrial applications. The three institutes are supported by an innovation advisory board with representatives from business.

"Artificial intelligence is the new basic technology in many areas of life and has gigantic value creation potential. With the project, we are further expanding the range of competencies for technology transfer in this key discipline of artificial intelligence. We are thus taking an important step towards industrial application in our medium-sized companies. Many companies from a wide variety of industries in Baden-Württemberg can benefit significantly from the offer," commented the Minister of Economic Affairs, Dr. Nicole Hoffmeister-Kraut.

The research organisations have already received expressions of interest for the innovation advisory board from well-known small and medium-sized manufacturing companies. Interested organisations may also join the advisory board during the project's duration.

Furthermore, the aim is to collaborate with state-based application-oriented AI research projects such as the "Learning Systems and Cognitive Robotics Progress Center" or the "Competence Center for AI Engineering CC-KING."

Various functions and components for industrial use are being developed. Applications include, for example, sensor solutions with integrated local AI for object recognition, Edge AI solutions for robotic arms in lightweight construction and collaborative applications, or for use on gripping systems with local intelligence.

More:
Baden-Württemberg invests in a joint project to develop adaptive artificial intelligence chips - TheMayor.EU

Read More..

Global Artificial Intelligence in Construction Market Expected to Generate a Revenue of $ 2642.4 Million at a – GlobeNewswire

New York, USA, May 11, 2021 (GLOBE NEWSWIRE) -- According to a report published by Research Dive, the global artificial intelligence in construction market is anticipated to register a revenue of $2,642.4 million at a CAGR of 26.3% during the forecast period. The inclusive report provides a brief overview of the current scenario of the market including significant aspects of the market from growth factors, challenges, other market dynamics, restraints and various opportunities during the forecast period. The report also provides all the market figures making it easier and helpful for the new participants to understand the market.

Download FREE Sample Here! @ https://www.researchdive.com/download-sample/46

Dynamics of the Market

The cost-effectiveness and easy accessibility of advanced artificial intelligence products are the main factors driving the growth of the AI in construction market. The usage of AI helps companies by calculating overhead costs and providing accurate data related to a company's overall expenditure, which saves a lot of money for the company. Moreover, AI devices such as robots and drones help construction site workers with mapping and surveying and with making better decisions on site. This is another factor enhancing the growth of the global artificial intelligence in construction market.

One of the biggest restraining factors behind the growth of the market is the lack of technically skilled workers.

Segments of the Market

The report has divided the market into different segments based on application and regional outlook.

Check out how COVID-19 impacts the Artificial Intelligence in Construction Market. Click here to speak to our expert before buying the report & get more market insights @ https://www.researchdive.com/connect-to-analyst/46

Planning and Design Sub-Segment is Expected to Become the Most Lucrative

By application, the planning and design sub-segment accounted for $134.3 million in 2018 and is predicted to grow at a CAGR of 28.9% during the upcoming years. Planning and design is an indispensable application for construction companies, which is the main reason behind the growth of this segment.

North America to Dominate the Market

The North America regional market recorded a revenue of $146.9 million in 2018 and is expected to grow at a CAGR of 25.4% during the forecast period. The major contributors to this growth are the large population base of North American countries with high purchasing power, constant government investment in automation, and initiatives in artificial intelligence in the construction sector.
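As a rough illustration of how such CAGR figures compound (not taken from the report itself), the sketch below projects the North America 2018 base value quoted above at the stated 25.4% rate; the eight-year horizon is an assumption, since the excerpt does not state the exact forecast period.

```python
# Worked example of how a CAGR projection compounds. The 2018 base value
# and growth rate come from the figures quoted above for North America;
# the forecast horizon (years) is an assumption for illustration.

def project_revenue(base_value_musd: float, cagr: float, years: int) -> float:
    """Project revenue forward: value_n = value_0 * (1 + CAGR) ** n."""
    return base_value_musd * (1.0 + cagr) ** years

north_america_2018 = 146.9   # $ million (from the report excerpt)
cagr = 0.254                 # 25.4% per year

for year in range(1, 9):     # hypothetical 8-year horizon
    projected = project_revenue(north_america_2018, cagr, year)
    print(f"2018 + {year} years: ${projected:,.1f}M")
```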

Request for Artificial Intelligence in Construction Market Report Customization & Get 10% Discount on this Report @ https://www.researchdive.com/request-for-customization/46

Key Players of the Market

The report mentions the key players of the global artificial intelligence in construction market which include

These players are focusing on research and development, product launches, collaborations and partnerships to sustain the growth of the market. For instance, in November 2020, Dassault Systèmes announced jointly with NuoDB that Dassault Systèmes, which already had a 16% ownership interest, is acquiring the remainder of NuoDB equity.

Based in Cambridge, Massachusetts, NuoDB provides a cloud-native distributed SQL database that capitalizes on the competitive advantages of the cloud, with on demand scalability, continuous availability and transactional consistency, and is built for mission critical applications.

The report also summarizes many important aspects including financial performance of the key players, SWOT analysis, product portfolio, and latest strategic developments.

In addition, the report includes numerous points about the leading business manufacturers, such as SWOT analysis, product portfolio and financial status - Inquire to get access to the detailed Top Companies Development Strategy Report

TRENDING REPORTS WITH COVID-19 IMPACT ANALYSIS-

The rest is here:
Global Artificial Intelligence in Construction Market Expected to Generate a Revenue of $ 2642.4 Million at a - GlobeNewswire

Read More..

Europe Seeks To Tame Artificial Intelligence With The World’s First Comprehensive Regulation – Technology – Worldwide – Mondaq News Alerts

In what could be a harbinger of the future regulation of artificial intelligence (AI) in the United States, the European Commission published its recent proposal for regulation of AI systems. The proposal is part of the European Commission's larger European strategy for data, which seeks to "defend and promote European values and rights in how we design, make and deploy technology in the economy." To this end, the proposed regulation attempts to address the potential risks that AI systems pose to the health, safety, and fundamental rights of Europeans.

Under the proposed regulation, AI systems presenting the least risk would be subject to minimal disclosure requirements, while at the other end of the spectrum AI systems that exploit human vulnerabilities and government-administered biometric surveillance systems are prohibited outright except under certain circumstances. In the middle, "high-risk" AI systems would be subject to detailed compliance reviews. In many cases, such high-risk AI system reviews will be in addition to regulatory reviews that apply under existing EU product regulations (e.g., the EU already requires reviews of the safety and marketing of toys and radio frequency devices such as smart phones, Internet of Things devices, and radios).

The proposed AI regulation applies to all providers that market in the EU or put AI systems into service in the EU, as well as users of AI systems in the EU. This scope includes governmental authorities located in the EU. The proposed regulation also applies to providers and users of AI systems whose output is used within the EU, even if the producer or user is located outside of the EU. If the proposed AI regulation becomes law, the enterprises that would be most significantly affected by the regulation are those that provide high-risk AI systems not currently subject to detailed compliance reviews under existing EU product regulations, but that would be under the AI regulation.

The term "AI system" is defined broadly as softwarethat uses any of several identified approaches to generate outputsfor a set of human-defined objectives. These approaches cover farmore than artificial neural networks and other technologiescurrently viewed by many as traditional as "AI." In fact,the identified approaches cover many types of software that fewwould likely consider "AI," such as "statisticalapproaches" and "search and optimization methods."Under this definition, the AI regulation would seemingly cover theday-to-day tools of nearly every e-commerce platform, social mediaplatform, advertiser, and other business that rely on suchcommonplace tools to operate.

This apparent breadth can be assessed in two ways. First, this definition may be intended as a placeholder that will be further refined after the public release. There is undoubtedly no perfect definition for "AI system," and by releasing the AI regulation in its current form, lawmakers and interested parties can alter the scope of the definition following public commentary and additional analysis. Second, most "AI systems" inadvertently caught in the net of this broad definition would likely not fall into the high-risk category of AI systems. In other words, these systems generally do not negatively affect the health and safety or fundamental rights of Europeans, and would only be subject to disclosure obligations similar to the data privacy regulations already applicable to most such systems.

The proposed regulation prohibits uses of AI systems for purposes that the EU considers to be unjustifiably harmful. Several categories are directed at private sector actors, including prohibitions on the use of so-called "dark patterns" through "subliminal techniques beyond a person's consciousness," or the exploitation of age, physical or mental vulnerabilities to manipulate behavior in a way that causes physical or psychological harm.

The remaining two areas of prohibition are focused primarily on governmental actions. First, the proposed regulation would prohibit use of AI systems by public authorities to develop "social credit" systems for determining a person's trustworthiness. Notably, this prohibition has carveouts, as such systems are only prohibited if they result in a "detrimental or unfavorable treatment," and even then only if unjustified, disproportionate, or disconnected from the content of the data gathered. Second, indiscriminate surveillance practices by law enforcement that use biometric identification are prohibited in public spaces except in certain exigent circumstances, and with appropriate safeguards on use. These restrictions reflect the EU's larger concerns regarding government overreach in the tracking of its citizens. Military uses are outside the scope of the AI regulation, so this prohibition is essentially limited to law enforcement and civilian government actors.

"High-risk" AI systems receive the most attention inthe AI regulation. These are systems that, according to thememorandum accompanying the regulation, pose a significant risk tothe health and safety or fundamental rights of persons. This boilsdown to AI systems that (1) are a regulated product or are used asa safety component for a regulated product like toys, radioequipment, machinery, elevators, automobiles, and aviation, or (2)fall into one of several categories: biometric identification,management of critical infrastructure, education and training,human resources and access to employment, law enforcement,administration of justice and democratic processes, migration andborder control management, and systems for determining access topublic benefits. The regulation contemplates this latter categoryevolving over time to include other products and services, some ofwhich may face little product regulation at present. Enterprisesthat provide these products may be venturing into an unfamiliar andevolving regulatory space.

High-risk AI systems would be subject to extensive requirements, necessitating new compliance and monitoring procedures as well as changes to products on both the front end and the back end, such as:

The regulation would impose transparency and disclosure requirements for certain AI systems regardless of risk. Any AI system that interacts with humans must include disclosures to the user that they are interacting with an AI system. The AI regulation provides no further details on this requirement, so a simple notice that an AI system is being used would presumably satisfy this regulation. Most "AI systems" (as defined in the regulation) would fall outside of the prohibited and high-risk categories, and so would only be subject to this disclosure obligation. For that reason, while the broad definition of "AI system" captures much more than traditional artificial intelligence techniques, most enterprises will feel minimal impact from being subject to these regulations.
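To make the "simple notice" point concrete, here is a minimal, hypothetical sketch of how a provider might surface such a disclosure in a chat interface. The class, function names and notice wording are assumptions for illustration; they are not language from the proposal or from any existing compliance tool.

```python
# Hypothetical sketch: prepend a one-time AI disclosure to a chat session.
# The wording and structure are illustrative only.

AI_DISCLOSURE = "Notice: you are interacting with an automated AI system."

class DisclosingChatbot:
    def __init__(self, generate_reply):
        # generate_reply: any callable mapping a user message to a reply string
        self._generate_reply = generate_reply
        self._disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self._generate_reply(user_message)
        if not self._disclosed:
            # Surface the disclosure once, at the start of the conversation.
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n{reply}"
        return reply

# Example usage with a trivial stand-in reply function.
bot = DisclosingChatbot(lambda msg: f"You said: {msg}")
print(bot.respond("Hello"))   # includes the disclosure
print(bot.respond("Thanks"))  # subsequent replies omit it
```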

The proposed regulation provides for tiered penalties depending on the nature of the violation. Prohibited uses of AI systems (subliminal manipulation, exploitation of vulnerabilities, and development of social credit systems) and prohibited development, testing, and data use practices could result in fines of the higher of either 30,000,000 EUR or 6% of a company's total worldwide annual revenue. Violation of any other requirements or obligations of the proposed regulation could result in fines of the higher of either 20,000,000 EUR or 4% of a company's total worldwide annual revenue. Supplying incorrect, incomplete, or misleading information to certification bodies or national authorities could result in fines of the higher of either 10,000,000 EUR or 2% of a company's total worldwide annual revenue.
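To illustrate the "higher of" structure described above, the short calculation below uses only the figures quoted in this article; the tier labels are informal shorthand added for readability, not terms from the proposal.

```python
# Illustrative calculation of the tiered "higher of" fine caps quoted above.
# Tier labels are informal; the figures come from the quoted proposal.

TIERS = {
    "prohibited_practices":   (30_000_000, 0.06),  # 30M EUR or 6% of revenue
    "other_violations":       (20_000_000, 0.04),  # 20M EUR or 4%
    "misleading_information": (10_000_000, 0.02),  # 10M EUR or 2%
}

def maximum_fine(tier: str, worldwide_annual_revenue_eur: float) -> float:
    """Return the cap for a tier: the higher of the fixed amount or the
    percentage of total worldwide annual revenue."""
    fixed_amount, revenue_share = TIERS[tier]
    return max(fixed_amount, revenue_share * worldwide_annual_revenue_eur)

# For a company with 1 billion EUR in worldwide annual revenue,
# 6% of revenue (60M EUR) exceeds the 30M EUR floor.
print(maximum_fine("prohibited_practices", 1_000_000_000))  # 60000000.0
```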

Notably, EU government institutions are also subject to fines, with penalties up to 500,000 EUR for engaging in prohibited practices that would result in the highest fines had the violation been committed by a private actor, and fines for all other violations up to 250,000 EUR.

The proposed regulation remains subject to amendment and approval by the European Parliament and potentially the European Council, a process which can take several years. During this long legislative journey, components of the regulation could change significantly, and it may not even become law.

Although the proposed AI regulation would mark the most comprehensive regulation of AI to date, stakeholders should be mindful that current U.S. and EU laws already govern some of the conduct it attributes to AI systems. For example, U.S. federal law prohibits unlawful discrimination on the basis of a protected class in numerous scenarios, such as in employment, the provision of public accommodations, and medical treatment. Uses of AI systems that result in unlawful discrimination in these arenas already pose significant legal risk. Similarly, AI systems that affect public safety or are used in an unfair or deceptive manner could be regulated through existing consumer protection laws.

Apart from such generally applicable laws, U.S. laws regulating AI are limited in scope, and focus on disclosures related to AI systems interacting with people or are limited to providing guidance under current law in an industry-specific manner, such as with autonomous vehicles. There is also a movement towards enhanced transparency and disclosure obligations for users when their personal data is processed by AI systems, as discussed further below.

To date, no state or federal laws specifically targeting AI systems have been successfully enacted into law. If the proposed EU AI regulation becomes law, it will undoubtedly influence the development of AI laws in Congress and state legislatures, and potentially globally. This is a trend we saw with the EU's General Data Protection Regulation (GDPR), which has shaped new data privacy laws in California, Virginia, Washington, and several bills before Congress, as well as laws in other countries.

U.S. legislators have so far proposed bills that would regulate AI systems in a specific manner, rather than comprehensively as the EU AI regulation purports to do. In the United States, "algorithmic accountability" legislation attempts to address concerns about high-risk AI systems similar to those articulated in the EU through self-administered impact assessments and required disclosures, but lacks the EU proposal's outright prohibition on certain uses of AI systems, and nuanced analysis of AI systems used by government actors. Other bills would solely regulate government procurement and use of AI systems, for example, California AB-13 and Washington SB-5116, leaving industry free to develop AI systems for private, nongovernmental use. Upcoming privacy laws such as the California Privacy Rights Act (CPRA) and the Virginia Consumer Data Protection Act (CDPA), both effective January 1, 2023, do not attempt to comprehensively regulate AI, instead focusing on disclosure requirements and data subject rights related to profiling and automated decision-making.

Ultimately, the AI regulation (in its current form) will have minimal impact on many enterprises unless they are developing systems in the "high-risk" category that are not currently regulated products. But some stakeholders may be surprised by, and unsatisfied with, the fact that the draft legislation puts relatively few additional restrictions on purely private sector AI systems that are not already subject to regulation. The drafters presumably did so to avoid overly burdening private sector activities. But it is yet to be seen whether any enacted form of the AI regulation would strike that balance in the same way.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

View original post here:
Europe Seeks To Tame Artificial Intelligence With The World's First Comprehensive Regulation - Technology - Worldwide - Mondaq News Alerts

Read More..

DataRobot Joins World Economic Forum Initiative to Advance the Equity, Accountability, and Transparency of Artificial Intelligence – Business Wire

BOSTON--(BUSINESS WIRE)--DataRobot, the leader in enterprise AI, today announced that it has joined the Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning initiative launched by the World Economic Forum to accelerate the societal benefits of AI and machine learning while ensuring equity, privacy, transparency, accountability, and social impact.

This initiative brings together key stakeholders from the public and private sectors to co-design and test policy frameworks that accelerate the benefits and mitigate the risks of AI and machine learning. Project areas include standards for protecting children, creating an AI regulator for the 21st century, and addressing the unique challenges of facial recognition technology. As members of the initiative, DataRobot will work closely with researchers, organizations, and other key stakeholders to drive new understandings of how AI can and should be used to better society, while ensuring use cases are ethical and equitable.

Ted Kwartler, VP of Trusted AI at DataRobot, said, "As a leader in artificial intelligence and machine learning, it is our responsibility to play an active role in ensuring that AI will be used for the betterment of society. We are standing at a critical technological moment in history for companies to drive change and shape a more equitable, AI-powered future for the benefit of all, not just the benefit of a few. With the goal of improving organizational behavior widely and continuing to do good with technology, we are pleased to join forces with the World Economic Forum to make new alliances, start new conversations, and mobilize the resources needed to make the world of technology more sustainable and inclusive. We are excited to take part in this valuable platform, share our learnings across the industry, and work with the World Economic Forum to build a more ethical, explainable, and equitable AI ecosystem."

DataRobot's partnership with the World Economic Forum follows its long-standing commitment to AI trust, governance, and ethics, highlighted by the formation of a Trusted AI team in 2019, which is led by Kwartler. The team's mission is to build and deliver trustworthy and ethical AI systems and provide actionable guidance for the company's customers. These customers include some of the largest banks in the world, top U.S. health insurers, and defense, intelligence, and civilian agencies within the federal government.

"Machine learning and artificial intelligence are rapidly advancing and are being deployed across all aspects of daily life. As this technology continues to develop and adoption grows, collaboration across organizations is essential to optimizing accountability, transparency, privacy, and impartiality. This initiative brings together experts who are not only looking to explore the positive impact AI can have on society at large, but who will also ensure trust in the organizations and individuals leveraging the technology," said Kay Firth-Butterfield, Head of AI & Machine Learning and Member of the Executive Committee of the World Economic Forum.

To learn more about DataRobot's commitment to ensuring ethical, trustworthy, and unbiased AI and machine learning, visit http://www.datarobot.com/platform/trusted-ai/.

About DataRobot
DataRobot is the leader in enterprise AI, delivering trusted AI technology and enablement services to global enterprises competing in today's Intelligence Revolution. DataRobot's enterprise AI platform democratizes data science with end-to-end automation for building, deploying, and managing machine learning models. This platform maximizes business value by delivering AI at scale and continuously optimizing performance over time. The company's proven combination of cutting-edge software and world-class AI implementation, training, and support services empowers any organization, regardless of size, industry, or resources, to drive better business outcomes with AI.

DataRobot has offices across the globe and funding from some of the world's best investing firms including Alliance Bernstein, Altimeter, B Capital Group, Cisco, Citi Ventures, ClearBridge, DFJ Growth, Geodesic Capital, Glynn Capital, Intel Capital, Meritech, NEA, Salesforce Ventures, Sands Capital, Sapphire Ventures, Silver Lake Waterman, Snowflake Ventures, Tiger Global, T. Rowe Price, and World Innovation Lab. DataRobot was named to the Forbes 2020 Cloud 100 list and the Forbes 2019, 2020, and 2021 Most Promising AI Companies lists, and was named a Leader in the IDC MarketScape: Worldwide Advanced Machine Learning Software Platforms Vendor Assessment. For more information visit http://www.datarobot.com/, and join the conversation on the DataRobot Community, More Intelligent Tomorrow podcast, Twitter, and LinkedIn.

Original post:
DataRobot Joins World Economic Forum Initiative to Advance the Equity, Accountability, and Transparency of Artificial Intelligence - Business Wire

Read More..

Need more budget for artificial intelligence projects? Point out what the competition may be doing – ZDNet

One can be forgiven for thinking that everyone in the world is adopting sophisticated, next-gen technologies such as artificial intelligence and autonomous systems, and their company is falling woefully behind. While it's more the case of everyone trying to find their way with yet-to-be-fully-understood technologies, this fear of falling behind is real, and is driving investment.

That's the word from a survey of 200 enterprises from Seeqc, which finds rising investment in deep-tech solutions is largely driven by the threat of industry competition, with substantial R&D budgets and jobs on the line. More than two-thirds, 67%, fear their competitors are further along than their company.

That's certainly a way to get the full attention of business leaders controlling the purse strings.

At the same time, many have high expectations from these investments. Most respondents (58%) said they expect to see ROI from deep tech investments within one to five years. While specific technologies each come with their own implementation timetables, deep tech's impending business impact is accelerating with each dollar spent.

The survey's authors call this "deep tech," which they define as solutions aimed at substantial scientific or engineering challenges to previously intractable problems. (In other words, it has artificial intelligence written all over it.) Along with AI, this category includes solutions such as autonomous vehicles, blockchain, and even quantum computing.

The survey finds that decision-makers are under immense pressure and time constraints to source solutions to fast-approaching business challenges. A majority of large enterprises (defined as those with 1,000 or more employees), 57%, that are actively investigating deep tech solutions are doing so to solve a specific existing or emerging business problem. I actually like the phrase "deep tech" to describe the constellation of next-generation solutions coming on the scene.

While companies are forging ahead to solve specific challenges, the report also shows they're keeping a close eye on their competitors' progress. Either real or perceived, fear of their peers' progress is a major investment driver. More than a third of respondents said that keeping up with competition was their number one reason for investigating deep tech solutions.

Motivations driving investments in these technologies include the following:

Skills and people issues dominate executives' concerns as they dive into new technology approaches as well. A majority, 52%, cite assembling the right internal team with appropriate technical expertise as their greatest challenge, making this the leading area of concern.

Deep tech solutions require up-front investments. Seventy-one percent of companies reported dedicating 15% or more of their entire R&D budget to investigating deep tech solutions, with 16% dedicating more than a quarter of their budgets. Investing large sums requires a great deal of research, forethought, and a willingness to take some risks. Eighty-two percent of decision-makers have fears or anxieties about investing in and implementing deep tech solutions, the survey's authors report. Another 74% fear making the wrong investment and wasting resources. There's also fear of what could happen to jobs -- 71% fear deep tech solutions will make parts of their business or even jobs obsolete.

Excerpt from:
Need more budget for artificial intelligence projects? Point out what the competition may be doing - ZDNet

Read More..

ITS Internet Security and Privacy Policy | New York State …

Overview

Thank you for visiting the NYS Office of Information Technology Services (ITS) website. This website is designed to make it easier and more efficient for New York State citizens and businesses to learn about technology initiatives in New York State (State) government and to interact with ITS. ITS recognizes that visitors to this website are concerned about their privacy. ITS is committed to preserving your privacy when visiting this website.

Consistent with the provisions of the Internet Security and Privacy Act, the Freedom of Information Law and the Personal Privacy Protection Law, this policy describes ITS's privacy practices regarding information collected from users of this website. This policy describes what data is collected and how that information is used. ITS may, at its sole discretion, change, modify, add, or delete portions of this policy. Because this privacy policy only applies to this website, you should examine the privacy policy of any website, including other State websites, you access through this website.

For purposes of this policy, personal information means any information concerning a natural person, as opposed for instance to a corporate entity, which, because of name, number, symbol, mark, or other identifier, can be used to identify that natural person. ITS only collects personal information about you when you provide that information voluntarily by sending an e-mail or by initiating an online transaction, such as a survey, registration or order form.

Information Collected Automatically When You Visit This Website

When visiting this website ITS automatically collects and stores the following information about your visit:

None of the foregoing information is deemed to constitute personal information.

This information that is collected automatically is used to improve this website's content and to help ITS understand how people are using this website. This information is collected for statistical analysis, to determine what information is of most and of least interest to our visitors, and to identify system performance or problem areas. The information is not collected for commercial marketing purposes and ITS does not sell or distribute the information collected from the website for commercial marketing purposes.

Cookies

Cookies are simple text files stored on your web browser to provide a means of distinguishing among users of this website. The use of cookies is a standard practice among Internet websites. In order to better serve you, we may use "temporary" cookies to enhance, customize or enable your visit to this web site. Temporary cookies do not contain personal information and do not compromise your privacy or security and are erased during the operation of your browser or when your browser is closed.

During your visit to this website, you may complete a registration form in order to personalize your use of the website. In such an event, we may deliver a "persistent" cookie which would be stored on your computer's hard drive. This persistent cookie will allow the website to recognize you when you visit again and tailor the information presented to you based on your needs and interests. ITS uses persistent cookies only with your permission.

The software you use to access the website allows you to refuse new cookies or delete existing cookies. Refusing or deleting these cookies may limit your ability to take advantage of some of the features of this website.
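As a rough illustration of the distinction described above (not part of the ITS policy), a web application typically creates a temporary cookie by omitting an expiry and a persistent cookie by setting one. The cookie names and values below are assumptions made up for the example.

```python
# Illustrative only: how a web application commonly distinguishes a
# temporary (session) cookie from a persistent one, using Python's
# standard library. Names and values are placeholders.
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Temporary cookie: no Max-Age/Expires, so the browser discards it
# when the browsing session ends.
cookies["session_id"] = "abc123"
cookies["session_id"]["path"] = "/"

# Persistent cookie: Max-Age keeps it on disk so the site can
# recognize the visitor on a later visit.
cookies["site_prefs"] = "en"            # e.g. a remembered language choice
cookies["site_prefs"]["path"] = "/"
cookies["site_prefs"]["max-age"] = 30 * 24 * 3600  # 30 days

# Each entry becomes a Set-Cookie header sent with the HTTP response.
print(cookies.output())
```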

Information Collected When You Email this Website or Initiate an Online Transaction Through this Website

If during your visit to this website you send an email to ITS, your email address and the contents of your email will be collected. The information collected is not limited to text characters and may include audio, video, and graphic information formats you send us. The information is retained in accordance with the public record retention provisions in the State Arts and Cultural Affairs Law. Your email address and the information contained in your email will be used to respond to you, to address issues you identify, to further improve this website, or to forward your email to another agency for appropriate action. Your email address is not collected for commercial marketing purposes and ITS does not sell or distribute your email address for any purposes.

During your visit to this website you may initiate a transaction such as a survey, registration or order form. The information, including personal information, volunteered by you in initiating the transaction is used by ITS to operate ITS programs, which include the provision of goods, services and information. The information collected by ITS may be disclosed by ITS for those purposes that may be reasonably ascertained from the nature and terms of the transaction in connection with which the information was submitted by you.

Currently, ITS does not knowingly collect personal information from children or create profiles of children through this website. People are cautioned that the collection of personal information provided by any individual in an email or through an online transaction will be treated the same as information given by an adult, and may, unless exempted from access by federal or State law, be subject to public access. ITS encourages parents and teachers to be involved in children's Internet activities and to provide guidance whenever children are asked to provide personal information on-line.

Information and Choice

As noted above, ITS does not collect any personal information about you during your visit to this website, unless you provide that information voluntarily by sending an e-mail or initiating an online transaction such as a survey, registration, or order form. You may choose not to send us an e-mail, respond to a survey or complete an order form. While your choice not to participate in these activities may limit your ability to receive specific services or products through this website, it will not prevent you from requesting services or products from ITS by other means, and will not normally have an impact on your ability to take advantage of other features of the website, including browsing or downloading most publicly available information.

Disclosure of Information Collected Through This Website

The collection of information through this website and the disclosure of that information are subject to the provisions of the Internet Security and Privacy Act. ITS will only collect personal information through this website, or disclose such personal information, if the user has consented to the collection and disclosure of such personal information. The voluntary disclosure of personal information to ITS by the user, whether solicited or unsolicited, constitutes consent to the collection and disclosure of the information by ITS for the purposes for which the user disclosed the information to ITS, as was reasonably ascertainable from the nature and terms of the disclosure.

However, ITS may collect or disclose personal information without user consent if the collection or disclosure is: (1) necessary to perform the statutory duties of ITS, or necessary for ITS to operate a program authorized by law, or authorized by state or federal statute or regulation; (2) made pursuant to a court order or by law; (3) for the purpose of validating the identity of the user; or (4) of information to be used solely for statistical purposes that is in a form that cannot be used to identify any particular person.

Further, the disclosure of information, including personal information, collected through this website is subject to the provisions of the Freedom of Information Law and the Personal Privacy Protection Law. Additionally, ITS may disclose personal information to federal or State law enforcement authorities to enforce its rights against unauthorized access or attempted unauthorized access to ITS's information technology assets and any other inappropriate use of its website.

Retention of Information Collected Through this Website

The information collected through this website is retained by ITS in accordance with the records retention and disposition requirements of the New York State Arts and Cultural Affairs Law. Information on the requirements of the Arts and Cultural Affairs Law may be found at http://www.archives.nysed.gov/records/mr_laws_acal5705.shtml. In general, the Internet services logs of ITS, comprising electronic files or automated logs created to monitor access and use of state agency services provided through this website, are retained for one year and then destroyed. Information, including personal information that you submit in an e-mail or when you initiate an online transaction such as a survey, registration form, or order form is retained in accordance with the records retention and disposition schedule established for the records of the program unit to which you submitted the information. Information concerning these record retention and disposition schedules may be obtained through the Internet privacy policy contact listed in this policy.

Access to and Correction of Personal Information Collected Through this Website

Any user may submit a request to the ITS privacy officer to determine whether personal information pertaining to that user has been collected through this website. Any such request shall be made in writing to the address below and must be accompanied by reasonable proof of identity of the user. Reasonable proof of identity may include verification of a signature, inclusion of an identifier generally known only to the user, or similar appropriate identification. The address of ITS's privacy compliance officer is: Privacy Officer, Office for Technology, State Capitol ESP, PO Box 2062, Albany, New York 12220-0062.

The privacy compliance officer shall, within five (5) business days of the date of the receipt of a proper request: (i) provide access to the personal information; (ii) deny access in writing, explaining the reasons therefore; or (iii) acknowledge the receipt of the request in writing, stating the approximate date when the request will be granted or denied, which date shall not be more than thirty (30) days from the date of the acknowledgment.

In the event that ITS has collected personal information pertaining to a user through the state agency website, and that information is to be provided to the user pursuant to the user's request, the privacy compliance officer shall inform the user of his or her right to request that the personal information be amended or corrected under the procedures set forth in section 95 of the Public Officers Law.

Confidentiality and Integrity of Personal Information Collected Through this Website

ITS limits employee access to personal information collected through this website to only those employees who need access to the information in the performance of their official duties. Employees who have access to this information are required to follow appropriate procedures in connection with any disclosures of personal information.

In addition, ITS has implemented procedures to safeguard the integrity of its information technology assets, including, but not limited to, authentication, monitoring, auditing and encryption. These Security measures have been integrated into the design, implementation, and day-to-day operations of this website as part of our continuing commitment to the security of electronic content as well as the electronic transmission of information.

NOTE: The information contained in this policy should not be construed in any way as giving business, legal, or other advice, or warranting as fail proof, the security of information provided via this website. For site security purposes and to ensure that this website remains available to all users, ITS employs software to monitor traffic to identify unauthorized attempts to upload or change information or otherwise cause damage to this website.

Links Disclaimer

In order to provide visitors with certain information, this website provides links to local, State and federal government agencies, and websites of other organizations. A link does not constitute an endorsement of the content, viewpoint, accuracy, opinions, policies, products, services, or accessibility of that website. Once you link to another website from this website, including one maintained by the State, you are subject to the terms and conditions of that website, including, but not limited to, its privacy policy.

Information Disclaimer

Information provided on this website is intended to allow the public immediate access to public information. While all attempts are made to provide accurate, current, and reliable information, ITS recognizes the possibility of human and/or mechanical error. Therefore, ITS, its employees, officers and agents make no representations as to the accuracy, completeness, currency, or suitability of the information provided by this website, and deny any expressed or implied warranty as to the same.

Contact Information

For questions regarding this Privacy Policy please email ITS at [emailprotected].

Original post:
ITS Internet Security and Privacy Policy | New York State ...

Read More..

Cyber Security Today, May 12, 2021 – Hate on messaging apps, Zix used in scams and QR code warning – IT World Canada

Fight hate on private messaging apps, how Zix is used for scams, a warning on QR codes and more.

Welcome to Cyber Security Today. It's Wednesday, May 12th. I'm Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com.

The government of Canada should do more to stop disinformation on private internet messaging platforms like WhatsApp, Telegram, WeChat, Facebook Messenger and Snapchat. That's the recommendation of the cybersecurity policy exchange at Toronto's Ryerson University. There's a lot of discussion about disinformation on public social media platforms like Twitter and Facebook. But in a report issued Tuesday the exchange says private messaging apps are also being abused to spread fake news, hate speech, sexual comments and materials that incite violence. In a survey of 2,500 Canadians, a quarter of respondents said they get messages with hate speech at least once a month. Rates are higher among people of colour. Almost half said they get private messages at least once a month that they suspect are false. Some platforms label suspect messages and limit the number of targets that suspect messages can go to. But the report says the federal government should do more, including improving digital literacy so people can spot falsehoods, and demanding transparency from private messaging platforms on how many accounts host and distribute bad material. There's a link to the full report here.

Recently I told you a ransomware gang had threatened to release confidential files of the Washington, D.C. police department unless it was paid. According to news reports the gang says it has started putting that data online. If true the files could damage police operations. Meanwhile the city of Tulsa, Oklahoma has suffered a ransomware attack.

Hackers are abusing the Zix secure messaging service. Here's how it works, according to a cybersecurity company called Abnormal Security: victims get a phishing message from a company's compromised email account. For example, one message came from a real estate title searching firm and went to a legal firm or someone trying to buy a house. The attachment claims to have a closing settlement counteroffer for a residence. The header on the link looks like it goes through Zix, which checks links. Those who know about Zix are supposed to be reassured. But the link goes to a page where victims are asked to enter their Microsoft login credentials to see a document. The reason why some anti-malware systems may miss this scam is the use of Zix. Be careful with any messages that have links to documents where you have to enter a password. You may be giving away access to your computer.

QR codes are black-and-white speckled squares that are scanned with a smartphone to get access to services or apps. But be careful what you scan: crooks also use them to infect mobile devices, because they can be made into stickers and slapped on top of legitimate codes. Victims think the scanned app will be helpful, but it's really data-stealing malware. Anna Chung, a threat researcher for Palo Alto Networks, told me this week that crooks are taking more interest in QR codes. That's because they're being used more by legitimate businesses as a result of COVID-19. For example, restaurants and stores use them as an aid to virus contact tracing. Rather than have someone take down your name when you enter a store so you can be called if a customer tests positive for the virus, you scan the code. It takes your smartphone number. Or restaurant customers are asked to scan a code to access menus and order food from their mobile device. Chung offers this advice for protection: install an anti-malware app for mobile devices that has QR code protection. Disable the automatic redirect capability in your mobile browser. That way instead of automatically going to where the scanned code wants, the browser will first tell you which website it's going to. Ignore invitations to scan a QR code for free internet. And be careful about the codes you scan. Stay away from codes on walls or windows. Beware of codes that look like they're made from a sticker.
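A rough sketch of the "show the destination before you go" idea mentioned above (not from the podcast): decode a QR code image and inspect the URL instead of opening it automatically. It assumes OpenCV (cv2) is installed; the file name and allowlist are placeholders.

```python
# Decode a QR code and surface the destination URL rather than opening it.
# Assumes OpenCV is available; file name and allowlist are hypothetical.
from urllib.parse import urlparse

import cv2

ALLOWED_HOSTS = {"example-restaurant.com", "example-tracing.gov"}

def inspect_qr(image_path: str) -> None:
    image = cv2.imread(image_path)
    if image is None:
        print(f"Could not read image: {image_path}")
        return
    data, _points, _raw = cv2.QRCodeDetector().detectAndDecode(image)
    if not data:
        print("No QR code found or it could not be decoded.")
        return
    host = urlparse(data).netloc
    print(f"QR code points to: {data}")
    if host in ALLOWED_HOSTS:
        print("Host is on the allowlist; opening would be reasonable.")
    else:
        print("Unrecognized host - do not open automatically.")

inspect_qr("menu_qr.png")
```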

I have another warning to smartphone owners to be careful choosing and downloading mobile apps. This comes after an Italian cybersecurity company called Cleafy discovered new Android malware apps whose goal is to steal passwords to bank accounts. This malware hides in apps like media players and package trackers from well-known couriers like UPS and DHL. If downloaded by a victim it asks to be installed as an Android Service. That's a warning sign. Android Services run in the background. Why would you want an app to run in the background? Other suspicious signs: the app asks for permission to observe your actions, to retrieve window content and to perform gestures. If you say yes to all of these things, the app can silently take screenshots of whatever you do, such as entering passwords. If there is no way to say no to an app when it asks for access permissions, that's another sign of a malicious app. Finally, if after you download an app you can't find its icon, for sure you've been hacked. This campaign so far is aimed at stealing passwords for banks in Europe. It probably won't be long before it goes after banks in Canada and the U.S.

Don't download apps sent to you. Only rely on Android apps from the Google Play store. Even then bad apps can sneak in. If the app you choose starts demanding permission to things you don't want, delete the app.

Finally, yesterday was the monthly Microsoft Patch Tuesday. Check that Windows has installed the latest security updates. Also check your Adobe Reader is patched. And Google has updated the Chrome browser with security fixes.

That's it for now. Remember links to details about these stories are in the text version of this podcast at ITWorldCanada.com. That's where you'll also find other cybersecurity stories of mine.

Follow Cyber Security Today on Apple Podcasts, Google Podcasts or add us to your Flash Briefing on your smart speaker.

Read the original post:
Cyber Security Today, May 12, 2021 - Hate on messaging apps, Zix used in scams and QR code warning - IT World Canada

Read More..

Ransomware: How the NHS learned the lessons of WannaCry to protect hospitals from attack – ZDNet

Four years ago, the UK's National Health Service suddenly found itself one of the most high profile victims of a global cyber attack.

On 12 May 2017, WannaCry ransomware hit organisations around the world, but hospitals and GP surgeries throughout England and Scotland were particularly badly affected. A significant number of services were disrupted as malware encrypted computers used by NHS trusts, forcing thousands of appointments to be cancelled and ambulances to be rerouted.

WannaCry was launched by North Korea, which used EternalBlue, a leaked NSA hacking tool, to spread as far and wide as possible, and it just so happened that many NHS Trusts were running Windows machines which had yet to receive the critical security patch Microsoft had released earlier that year.

It was and still is the largest cyber attack to hit the UK to date, and even if the NHS wasn't actually a specific target of WannaCry, it was a wake-up call as to how ransomware and other cyber campaigns could be a risk to an organisation with 1.5 million employees which provides healthcare services across the entire country.

WannaCry happened before ransomware rose to become the significant cybersecurity issue it is today and the NHS and National Cyber Security Centre know that if another ransomware campaign infiltrated the network, the impact could be devastating particularly during the Covid-19 pandemic.

"For the NHS, ransomware remains one of our biggest concerns," said Ian McCormack, deputy director for government, NCSC, speaking during a panel discussion at the NCSC's CYBERUK 21 virtual conference.

"Ransomware packages have got much more sophisticated, ransomware is becoming much slicker in terms of how it's developed".

SEE:Network security policy(TechRepublic Premium)

To protect networks from ransomware attacks, the NHS has learned the lessons from WannaCry and is aiming to ensure that it's harder for cyber criminals to exploit vulnerabilities in order to distribute malware.

One of those lessons is making NHS Trusts aware about newly disclosed security vulnerabilities and, if needed, providing support in order to apply the relevant patches.

The NHS trusts which had applied the critical Microsoft update to patch EternalBlue avoided falling victim to WannaCry so it's hoped that by providing the resources to enable patch management, networks can be protected against future attacks which attempt to exploit new vulnerabilities.

"Within NHS Digital and working closely with NHSX and NCSC, we offer a high severity alerts process, so we will review and triage vulnerabilities," said Neil Bennett, chief information security officer (CISO)at NHS Digital, the national IT provider for the NHS.

"And where we believe vulnerabilities are particularly critical and applicable to the NHS, we'll push out alerts advising organisations to take action to remediate and put time scales around it".

Recent vulnerabilities NHS Digital has helped hospitals and GP surgeries protect their networks against include zero-day vulnerabilities in Microsoft Exchange Server, plus TCP/IP vulnerabilities discovered in millions of Internet of Things devices.

If abused, both could enable cyber attackers to take control of machines and gain wider access to networks, helping lay the groundwork for additional attacks, so NHS Digital was keen to ensure the patches were applied.

"We've encouraged organisations to move at pace and when needed, offer support," said Bennett.

But there's more to protecting against a ransomware attack than just applying the correct security patches and a lot of effort has gone into ensuring there are backups for NHS systems across the country.

That means if the worst happens and somehow a network did fall victim to a ransomware attack, it's possible to restore the network from a recent point, without having to consider paying a ransom to cyber criminals.

"Backups was a very key area of focus for us," said Bennett, who described how in some cases, that has meant new backup systems entirely.

"We provided support to individual trusts on reviewing their backups, very much aligned with the NCSC's backup guidance. Then with the findings we'd support the organisations remediating against recommendations and in some cases NHSX actually funded new backup solutions, ideally cloud-based backup solutions," he explained.

It's evident that cyber criminals will attempt to exploit any vulnerability they can in order to infect a network with ransomware or any other form of malware, and it's hoped that by regularly providing assistance with security patching and offering advice on backups, another WannaCry can be avoided, especially as cyber attacks against healthcare providers elsewhere have demonstrated how dangerous they can be.

"There's been numerous ransomware incidents around the world that have affected healthcare organisations in the US and France, for example and that shows that the health sector is certainly not immune to that threat," said McCormack.

MORE ON CYBERSECURITY

The rest is here:
Ransomware: How the NHS learned the lessons of WannaCry to protect hospitals from attack - ZDNet

Read More..

Can't eat the internet! Raab pledges £22m cyber security for vulnerable countries as £4bn cut from foreign aid – The London Economic

Dominic Raab has announced £22 million worth of investment to bolster cyber security capabilities in developing countries as he warned hostile state actors and criminal gangs are using technology to undermine democracy.

It comes as NGOs dismissed a claim by U.K. Foreign Secretary Dominic Raab that "no one is going hungry because we haven't signed checks" as shocking and simply not true. Following the economic shock of the coronavirus crisis, the chancellor cut the foreign aid budget from 0.7% to 0.5% of total national income, a reduction of around £4bn.

"Cuts to humanitarian aid by the UK are a tragic blow for many of the world's most marginalised people," 200 charities said in a joint statement in April.

Organisations including Save the Children and Oxfam said humanitarian assistance was being reduced by more than £500m.

While the UK will continue cutting aid through 2021, Joe Biden announced this month an increase of $5.4bn (£3.9bn), or 10%, for USAid, the US government's international development agency.

Aid organizations are still grappling with funding uncertainty despite being weeks into a new financial year and have said they are gravely concerned about the impact their programs will feel from U.K. aid cuts, including in Syria, Yemen, and the Democratic Republic of Congo.

Action Against Hunger's Jean-Michel Grand said Raab's comment was "simply not true". He wrote: "Right now in the DRC, 27 million people are going hungry, and 22 days into the new financial year our teams are still waiting for assurances on their funding. Health centres will close. Lives will be lost."

The co-founder and co-CEO of Purposeful in Sierra Leone said: "The timing of this is terrible not just because of the G7 and pandemic. It will mean thousands of girls will not have access to life-saving sexual reproductive facilities. It undercuts the UK's moral authority. These are political cuts."

The Foreign Secretary told the CyberUK conference that authoritarian regimes including North Korea, Iran, Russia and China are using digital technology to sabotage and steal, or to control and censor.

Speaking at the conference on Wednesday, Mr Raab called for international law to be respected in cyberspace and concluded there is a need to clarify how rules around online activity are enforced.

The UK, jointly with Interpol, will set up a new cyber operations hub in Africa working across Ethiopia, Ghana, Kenya, Nigeria and Rwanda to support joint operations against cyber crime.

In a speech four years on from the WannaCry ransomware attack, which hit the NHS and affected hospitals across England and Scotland, Mr Raab said cyber criminals now also acted as a threat to democracy.

He said: "There is also a democratic dimension to the threats that we see because elections are now a prime target."

The Foreign Secretary referred to the UK's 2019 general election, which he said Russian actors attempted to interfere with, as well as multiple cyber attacks during the 2016 and 2020 US elections.

In the last year alone, the National Cyber Security Centre dealt with 723 major cyber security incidents, the highest figure since the agency was formed five years ago, according to the Foreign Secretary.

"Some of this activity is aimed at theft or extortion, but it all too often is simply focused on sabotage and disruption," he told the conference.

"I think it's worth saying these actors are the industrial-scale vandals of the 21st century."

"These hostile state actors, the criminal gangs, they want to undermine the very foundations of our democracy."

Outlining the UK's strategy in dealing with such threats, including advice to businesses and families, Mr Raab said the efforts were starting to pay off as the nation made improvements in disrupting and deterring malicious activity.

"We want to see international law respected in cyberspace, just as we would anywhere else," he told the conference.

"We need to show how the rules apply to these changes in technology, the changes in threats, and the systematic attempts to render the internet a lawless space."

"Our challenge is to clarify how those rules apply, how they are enforced, and guard against authoritarian regimes bending the principles to meet their own malicious ends."

The £22 million of new funding is set to support cyber capacity in vulnerable countries, particularly in Africa and the Indo-Pacific, which will go towards supporting cyber response teams and online safety awareness campaigns.

Mr Raab added: "We can lead internationally in protecting the most vulnerable countries and at the same time bring together a wider coalition of countries to shape international rules that serve the common good."


Excerpt from:
Cant eat the internet! Raab pledges 22m cyber security for vulnerable countries as 4bn cut from foreign aid - The London Economic

Read More..

Hozon Auto teams up with cybersecurity giant | Automotive Industry News | just-auto – just-auto.com

Qihoo 360 CEO and founder Hongyi Zhou announced the plan to join the elite club of tech CEOs turned smart-carmakers at a company meeting on Tuesday. Household names such as Huawei, Xiaomi and Baidu have recently revealed similar plans to enter the automotive industry.

The Qihoo 360 announcement comes after Hozon Auto-owned EV brand Nezha announced in April that it planned to raise roughly 3 billion Chinese yuan (about $467m) in a Series D financing round. The cybertech firm is leading the raise and, according to Chinese media, is expected to become Hozon Auto's second largest shareholder at its close.

The formal confirmation of a partnership between the two companies was met with ample media attention in China, where the appetite for home-grown smart vehicles has grown in recent years.

Some, however, were left sceptical as Qihoo 360's core business is internet security, not vehicles. Zhou countered that, in his opinion, "not too many internet companies are building cars, but too few," adding that "without the help of the internet, car manufacturers would still follow traditional ways of thinking by replacing fuel tanks with batteries. Although this is a change in the industry, it cannot be considered a paradigm change."

Zhou explained that Qihoo 360 aims to take over the software development aspect for the vehicle's design. "For a good smart car, the hardware is the body and the software is the soul," the CEO said. "360 will use internet technology and the ideology of connectedness to transform Nezha's traditional car manufacturing model into a connected car model."

Qihoo 360 will likely bring its expertise in the area of cybersecurity into the smart car industry. Zhou said, "[Smart] cars are expected to become one of the largest players in the field of smart technology, and network security as well as the cybersecurity of connected vehicles will inevitably become an important aspect of 360's future strategy."

The fear of car hacking has become a growing concern among connected car owners. Last month, cyber insurer HSB found that a third of smart vehicle owners are worried that their cars will be hacked.

Qihoo 360 is the largest provider of internet and mobile security products in China. According to GlobalData's companies database, its product portfolio includes security guards, mobile guards, safe browsers, antivirus software and mobile assistants. It also provides entertainment services, loan navigation and credit guards.

The company previously owned a research group dedicated to automobile security, 360 Sky-Go Team, which cooperated with several mainstream auto brands such as Mercedes-Benz and BYD.

Qihoo 360 and Hozon Auto will also jointly set up a research centre in Beijing to promote the smart car revolution.

Continue reading here:
Hozon Auto teams up with cybersecurity giant | Automotive Industry News | just-auto - just-auto.com

Read More..