
India’s new VPN policy delayed by 3 months but major providers are already planning to leave – Firstpost

FP Staff | Jun 29, 2022 12:58:27 IST

A few weeks ago, India's Computer Emergency Response Team (CERT-In) and the Ministry of Electronics and Information Technology (MeitY) decided to implement new cybersecurity regulations.

These regulations were meant for VPNs and cloud storage service providers and required them to store all user-centric data on their servers for a period of five years. They were also required to share this data with regulatory and law enforcement agencies during investigations.

Although the new regulations were set to take effect on June 27, CERT-In has, after much backlash, decided to push the timeline back by three months.

CERT-In has now announced that the new VPN policy will come into effect on September 25, giving VPN providers more time to comply with the new rules. However, there will be no changes to the policies themselves.

In a statement, CERT-In said, "MeitY and CERT-In are in receipt of requests for the extension of timelines for implementation of these Cyber Security Directions of 28th April 2022 in respect of Micro, Small, and Medium Enterprises (MSMEs). Further, additional time has been sought for the implementation of a mechanism for validation of subscribers/customers by Data Centres, Virtual Private Server (VPS) providers, Cloud Service providers, and Virtual Private Network Service (VPN Service) providers."

Meanwhile, a number of VPN and cloud service providers have already exited the Indian market and many more are contemplating moving their servers out of India.

ExpressVPN and Surfshark, two of the most widely used VPN services, recently announced that they would remove their servers from India because they could not comply with the new rules. Although both will continue to work for Indian users via virtual servers, they will not be hosting any physical servers in the country.

The new regulations require VPN and cloud service providers to store user data such as names, IP addresses, email addresses, and phone numbers for at least five years, even after a user stops using the service. Furthermore, ISPs and all data centres, including those used by VPN services, are required to keep logs of all activity for 180 days, for national security and cybersecurity purposes.

Because this flies in the face of most VPNs' terms of service and their core purpose, VPN providers that host servers in India face a dilemma. That is why most of these providers are planning to shift their servers out of India to jurisdictions they consider safe havens.

Read the original here:
India's new VPN policy delayed by 3 months but major providers are already planning to leave - Firstpost


Supervision is key to promoting use of technology among children – Richmond Agyemang Junior – BusinessGhana

In today's digital era, supervision and parental guidance are key to ensuring the safe use of technology among children and that the quality of education is not affected, says Mr. Richmond Agyemang Junior, senior tutor at Nkawie Senior Secondary School and PR practitioner.

"Blocking certain site entries and bad pop-ups to prevent children from viewing inappropriate content, and strict parental guidance on internet usage, should not be taken for granted as we embrace the use of technology in our education system," he said.

Mr. Agyemang made the statement while speaking on the EdTech Monday segment of the Citi Breakfast Show on the topic "Harnessing the power of technology to create resilient educational systems." EdTech Mondays is an initiative championed by the Mastercard Foundation in partnership with MEST to leverage technology to advance education and learning.

He further said that to help bridge the technology access gap for children in rural and hard-to-reach communities, open education resources, which can be used for online and offline learning and teaching when hosted on cloud servers, can be adopted.

He urged the Education Ministry to keep updating the iBOX it has secured to meet the everyday needs of students, and to maintain the platform regularly so that Ghanaian students continue to benefit from it in the long run.

In addition, he reiterated the importance of leveraging technology to enhance teaching through visualizations and presentations, saying, "Some students are visual learners, and by employing tools which enable them to display different types of information, such as infographics, they will be able to retain the content in a more authentic and meaningful way."

Read more from the original source:
Supervision is key to promoting use of technology among children Richmond Agyemang Junior - BusinessGhana


Zubair arrest: Can a journalist be forced to hand over his electronic devices to the police? – Scroll.in

While remanding Mohammed Zubair to four-day police custody on Tuesday, chief metropolitan magistrate Snigdha Sarvaria noted that the journalist had not cooperated with the investigating agencies and ordered the police to retrieve the electronic devices (a mobile phone and a laptop) he had used to post a purportedly offensive tweet.

The tweet for which Zubair was arrested by the Delhi Police on Monday is from 2018. It contained a still from a 1983 Hindi movie of a signboard that once read "Honeymoon Hotel" repainted to read "Hanuman Hotel." An anonymous Twitter user with the handle @balajikijaiin alleged the tweet hurt Hindu sentiments. (The account has since disappeared.)

Vrinda Grover, Zubair's lawyer, had objected to the police seizing the journalist's devices. She argued that the police already had Zubair's current phone. Further, she said that a journalist's laptop, like a lawyer's, has sensitive material related to their work along with personal information. This, she said, was an attempt to conduct a fishing inquiry beyond the scope of the present case.

However, the court ordered the police to take custody of Zubair's electronic devices without explicitly commenting on this argument.

Zubair's case highlights a legal paradox: even though the Indian Constitution recognises the right against self-incrimination as well as the right to privacy as fundamental rights, in several instances, courts have allowed law enforcement agencies to take custody of a person's electronic devices.

While the law shields certain communications from being forcibly disclosed, such as communication between spouses and those between lawyers and clients, this protection does not extend to journalists and their sources.

The police have been given powers under the Code of Criminal Procedure, 1973, to seize and search devices, such as mobile phones and laptops, that are necessary for an investigation.

At the same time, Article 20(3) of the Indian Constitution says that no person accused of any offence can be compelled to be a witness against themselves. The Supreme Court has interpreted this provision to mean that while a person cannot be forced to either give testimony against themselves or take polygraph tests, they can be forced to give physical evidence, such as fingerprints or handwriting samples. The restrictions aim at limiting the extraction of personal knowledge.

The rationale is that physical evidence is neutral and needs to be compared with some other material to impute culpability. However, testimonies are incriminating by themselves.

Relying on this logic, two High Courts have recently allowed investigating agencies to take custody of an accused person's electronic devices.

In March 2021, the Karnataka High Court held that compelling someone to provide their mobile password or biometrics to unlock a device would not infringe Article 20(3), since it is in the nature of a direction to produce a document. Merely providing access to smartphones or emails would not amount to self-incrimination, as the investigating agency will still have to prove the allegations using other evidence.

In January, relying on this judgment, the Kerala High Court also upheld an investigating agencys right to forcibly access an accused persons phone.

The Karnataka High Court also said that if the accused does not cooperate, an adverse inference could be drawn against them, and that an investigating agency is at liberty to obtain backdoor access in case of non-cooperation by the accused.

It also reiterated the legal position that if a search is done without following the procedure, it might be illegal. However, such illegality would not make any seizures made during these searches inadmissible. But courts must be cautious while dealing with evidence collected from illegal searches, it added.

The right to privacy has been held to be a fundamental right by the Supreme Court. Thus any infringement of that right requires a few conditions to be met. The restriction must be lawful and have a legitimate state interest. It should not be disproportionate to the purpose of the law and must have a rational connection with the objective the state wants to achieve.

While compelling an accused to hand over their mobile phones or laptops, which contain a trove of information, can encroach on their right to privacy, courts have rejected this argument.

For example, the Karnataka High Court in its March judgment said that giving investigating agencies access to devices or emails for investigation would not infringe their privacy.

It reasoned that investigating crime is a legitimate state aim and asking the accused to merely disclose the password to their device is proportionate and has a causal link with the objective the state seeks to achieve.

The court acknowledged that access to phones and laptops gives the investigating officer free access to all the data not only on the equipment but on the cloud servers as well, which may include personal and privileged communication. But it said that using such data during an investigation falls within the exceptions to the right to privacy and its disclosure to third parties will be determined by the court.

Several legal commentators believe that these decisions wrongly interpret the existing law. The judgments of the Karnataka and Kerala High Courts are particularly concerning, legal commentator Gautam Bhatia wrote, because "at a time when mobile phones are becoming more and more an extension of our interior lives rather than simple accessories, criminal procedure law should be moving towards greater protection of mobile phone data rather than a position where the State has free access to it."

However, other courts may arrive at a different conclusion. "The Karnataka and Kerala High Courts' judgments are the only ones we have on this issue," criminal lawyer Abhinav Sekhri said. The legal position on compelling the accused to hand over electronic devices remains open to challenge, as other High Courts, such as Delhi, have similar issues pending.

Zubair's lawyers also argued that a journalist's digital devices hold a lot of sensitive information and should not be confiscated. However, in India, journalists do not enjoy greater freedom of expression or privacy than other citizens. Nor do they get protection from disclosing their sources.

While Section 15(2) of the Press Council of India Act, 1978 says that no journalist can be compelled to disclose their sources, this protection applies only to proceedings before the council.

Although the Indian Evidence Act, 1872 gives protection from disclosure of certain communications, it does not protect journalists. For instance, Section 122 says that spouses cannot be compelled to disclose communication made during the marriage, whereas Section 126 accords similar protection to lawyers for their professional communication. However, journalists do not have any such privileges.

In 1983, the Law Commission of India recommended inserting a section specifically to give protection to journalists against revealing their sources. However, this has not been acted on.

While the courts have made some observations on protecting journalistic sources, they have not taken a definitive stance on the matter. In the Pegasus case, where a military-grade spyware was used to snoop on journalists, activists and intellectuals, the Supreme Court noted, "Protection of journalistic sources is one of the basic conditions for the freedom of the press. Without such protection, sources may be deterred from assisting the press in informing the public on matters of public interest." However, no action has been taken in the case even eight months after the court made these observations and formed a committee to investigate.

On the contrary, in some instances, the courts have asked journalists to disclose their sources. Asif Tanha, an accused in the 2020 Delhi riots case, alleged that the Delhi Police had leaked his confession to media houses. When the Delhi Police denied this allegation, the Delhi High Court asked Zee News to file an affidavit disclosing its source.

Continue reading here:
Zubair arrest: Can a journalist be forced to hand over his electronic devices to the police? - Scroll.in


How to Make Teachers Informed Consumers of Artificial Intelligence – Market Brief – EdWeek

New Orleans – Artificial intelligence's place in schools may be poised to grow, but school districts and companies have a long way to go before teachers buy into the concept.

At a session on the future of AI in school districts, held at the ISTE conference this week, a panel of leaders discussed its potential to shape classroom experiences and the many unresolved questions associated with the technology.

The mention of AI can intimidate teachers, as it's so often associated with complex code and sophisticated robotics. But AI is already a part of daily life, in the way our phones recommend content to us or the ways that our smart home technology responds to our requests.

When AI is made relatable, that's when teachers buy into it, opening doors for successful implementation in the classroom, panelists said.

"AI sounds so exotic right now, but it wasn't that long ago that even computer science in classrooms was blowing our minds," said Joseph South, chief learning officer for ISTE. South is a former director of the office of educational technology at the U.S. Department of Education.

The first step in getting educators comfortable with AI is to provide them the support to understand it, said Nancye Blair Black, ISTE's AI Explorations project lead, who moderated the panel. That kind of support needs to come from many sources, from federal officials down to the state level and individual districts.

"We need to be talking about, 'What is AI?' and it needs to be explained," she said. "A lot of people think AI is magic, but we just need to understand these tools and their limitations and do more research to get people on board."

With the use of machine learning, AI technologies can adapt to individual students' needs in real time, tracking their progress and providing immediate feedback and data to teachers as well.

In instances where a student may be rushing through answering questions, AI technology can pick up on that and flag the student to slow down, the speakers said. This can provide a level of individual attention that can't be achieved by a teacher who is expected to be looking over every student's shoulder simultaneously.
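As a rough illustration of the kind of heuristic such a system might apply, consider flagging answers submitted implausibly fast. This sketch is purely hypothetical: the function name and the five-second threshold are illustrative assumptions, not details from any product described in the panel.

```python
# Hypothetical sketch: flag answers submitted faster than a plausibility
# threshold. The threshold would be tuned per question in a real system.

def flag_rushed_answers(response_times_s, min_plausible_s=5.0):
    """Return indices of answers submitted faster than the threshold."""
    return [i for i, t in enumerate(response_times_s) if t < min_plausible_s]

times = [12.4, 3.1, 9.8, 2.0, 15.6]  # seconds spent on each question
print(flag_rushed_answers(times))    # -> [1, 3]
```

A production system would likely combine timing with answer correctness and historical pace rather than a single fixed cutoff.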

Others see reasons to be wary of AIs potential impact on teaching and learning. Many ed-tech advocates and academic researchers have raised serious concerns that the technology could have a negative impact on students.

One longstanding worry is that the data AI systems rely on can be inaccurate or even discriminatory, and that the algorithms put into AI programs make faulty assumptions about students and their educational interests and potential.

For instance, if AI is used to influence decisions about which lessons or academic programs students have access to, it could end up scuttling students' opportunities, rather than enhancing them.

Nneka McGee, executive director for learning and innovation for the South San Antonio ISD, mentioned in the ISTE panel that a lot more research still has to be done on AI, regarding opportunity, data, and ethics.

"Some districts that are more affluent will have more funding, so how do we provide opportunities for all students?" she said.

"We also need to look into the amount of data that is needed and collected for AI to run effectively. Your school will probably need a data-sharing agreement with the companies you work with."

A lot of research needs to be done on AI's data security and accessibility, as well as how best to integrate such technologies across the curriculum, not just in STEM-focused courses.

It's important to start getting educators familiar with AI and how it works, panelists said, because when used effectively, AI can increase student engagement in the classroom and give teachers more time to customize lessons to individual student needs.

As AI picks up momentum within the education sphere, the speakers said that teachers need to start by learning the fundamentals of the technology and how it can be used in their classrooms. But a big share of the responsibility also falls on company officials developing new AI products, Black said.

When asked about advice for ed-tech organizations that are looking to expand into AI capabilities, Black emphasized the need for user-friendliness and an interface that can be seamlessly assimilated into existing curriculum and standards.

"Hand [teachers] something they can use right away, not just another thing to pile on what they already have," she said.

McGee, of the South San Antonio ISD, urges companies to include teachers in every part of the process when it comes to pioneering AI.

"Involve teachers because they're on the front lines; they're the first ones who see our students," she said. "It doesn't matter how much we do out here. If the teacher doesn't believe in what you're bringing to the table, it will not be successful."


Original post:
How to Make Teachers Informed Consumers of Artificial Intelligence - Market Brief - EdWeek


Editing Videos On the Cloud Using Artificial Intelligence – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

VideoVerse was founded to create exciting content with artificial intelligence technology and to make video editing democratic and accessible. The company's journey began in 2016, when Saket Dandotia, Alok Patil and Vinayak Shrivastav teamed up.

The trio wanted to create a technology that would disrupt the content industry. The solution they came up with was Magnifi, which, along with Styck and Illusto, makes up the VideoVerse ecosystem.

"We truly believe that technology should help all content creators maximize their investments by not only telling better stories but also garnering a wider reach, seamless transition and efficient working solutions. We are constantly innovating to best suit consumer needs and industry demands," said Meghna Krishna, CRO, VideoVerse.

The company conducted market surveys and focused research on narrowing down the exact challenges it was solving for. The vision was to build a platform that allowed for accommodations and fine-tuning needed to suit every aspect of the production process as well as client requirements. The company created its platform by harnessing the power of AI and ML. It worked towards ensuring the application was precise and efficient. Sports was the first genre VideoVerse forayed into and the team researched over 30 key sports and parameters that could be meta-tagged to generate bite-sized videos.

"The urgent need for a technology solution to support the post-production processes and the demand for a solution that addressed every specific pain point in scaling content production became clear to us," added Krishna.

Krishna believes that startups are the way forward for groundbreaking ideas and technologies to find a place in the enterprise world. There is tremendous scope for innovation and every new solution or idea only helps strengthen the community.

According to Forbes India, the video creation and consumption space is growing at 24 per cent per annum, and approximately 60 per cent of internet users in India consume videos online.

Artificial intelligence was a very new technology during VideoVerse's initial days, which made it tougher to convince clients and investors. The company has nonetheless raised $46.8 million in its recent Series B funding.

"There was a lot of ambiguity around the impact of AI and often the change from traditional methods to new age technology faces natural resistance. The challenge on hand was augmenting the existing awareness and educating end-users while ensuring that we had a seamless solution that did not disrupt the workflow," commented Krishna.

VideoVerse and its distinct cloud-agnostic products use artificial intelligence (AI) and machine learning (ML) technology to revolutionize how content is refined and consumed. As far as specific stacks go:

For Magnifi, the key technologies used are face and image recognition, vision models, optical character recognition, audio detection and NLP. Styck and Illusto both use full-stack applications (MERN [Mongo, Express, React, Node]).

"Easy access to video editing platforms that offer state-of-the-art, next-generation solutions is the need of the hour. Being cloud-agnostic and powered by AI and ML all our platforms have a great user interface that allows anyone to master the art of video creation. There is a growing need for social-optimized content and our products are geared towards providing that with one-click solutions," added Krishna.

The company's focus is to strengthen the team, further enhance the product features and offer a complete holistic solution to its clients for all their video editing needs. VideoVerse has offices in the U.S., Europe, Israel, and India and is expanding to new markets like Singapore and the Middle East.

See the article here:
Editing Videos On the Cloud Using Artificial Intelligence - Entrepreneur


Artificial Intelligences Environmental Costs and Promise – Council on Foreign Relations

Artificial intelligence (AI) is often presented in binary terms in both popular culture and political analysis. Either it represents the key to a futuristic utopia defined by the integration of human intelligence and technological prowess, or it is the first step toward a dystopian rise of machines. This same binary thinking is practiced by academics, entrepreneurs, and even activists in relation to the application of AI in combating climate change. The technology industry's singular focus on AI's role in creating a new technological utopia obscures the ways that AI can exacerbate environmental degradation, often in ways that directly harm marginalized populations. In order to utilize AI in fighting climate change in a way that both embraces its technological promise and acknowledges its heavy energy use, the technology companies leading the AI charge need to explore solutions to the environmental impacts of AI.

AI can be a powerful tool to fight climate change. AI self-driving cars, for instance, may reduce emissions by 50 percent by 2050 by identifying the most efficient routes. Employing AI in agriculture produces higher yields; peanut farmers in India achieved a 30 percent larger harvest by using AI technology. In addition, AI can provide faster and more accurate analysis of satellite images that identify disaster-stricken areas in need of assistance or rainforest destruction. AI-driven data analysis can also help predict hazardous weather patterns and increase accountability by precisely monitoring whether governments and companies are sticking to their emissions targets.

Yet AI and the broader internet and communications industry have increasingly come under fire for using exorbitant amounts of energy. Take data processing, for example. The supercomputers used to run cutting-edge AI programs are powered by the public electricity grid and supported by backup diesel-powered generators. Training a single AI system can emit over 250,000 pounds of carbon dioxide. In fact, the use of AI technology across all sectors produces carbon dioxide emissions at a level comparable to the aviation industry. These additional emissions disproportionately impact historically marginalized communities, who often live in heavily polluted areas and are more directly affected by the health hazards of pollution.
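To give a sense of scale for the 250,000-pound figure, it can be converted to metric tonnes and compared with a familiar benchmark. The per-passenger flight figure below is a rough public average used only for illustration; it does not come from the article.

```python
# Back-of-the-envelope scale check on the training-emissions figure above.
LB_TO_KG = 0.453592
training_co2_kg = 250_000 * LB_TO_KG   # ~113,400 kg, i.e. ~113 tonnes of CO2
per_passenger_flight_kg = 1_000        # rough long-haul round trip, per passenger
print(round(training_co2_kg / per_passenger_flight_kg))  # ~113 flights' worth
```

In other words, training one large model can emit on the order of a hundred long-haul flights' worth of CO2 per passenger-equivalent, which is why the energy sourcing of data centers matters so much.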


Recently, AI scientists and engineers have responded to these critiques and are considering new sources for powering data farms. However, even new, ostensibly more sustainable energy sources such as rechargeable batteries can exacerbate climate change and harm communities. Most rechargeable batteries are built using lithium, a metal whose extraction can have negative effects on marginalized communities. Lithium extraction, which is fueled by increasing demand for cleaner energy sources, requires enormous amounts of water, to the tune of 500,000 gallons for every ton of lithium extracted. In Chile, the second largest producer of lithium in the world, indigenous communities like the Copiapó people in the north often clash with mining companies over land and water rights. These mining activities are so water-intensive that, the Institute for Energy Research reports, they consumed 65 percent of the water in the Salar de Atacama region. This water loss damages and permanently depletes wetlands and water sources, which has endangered native species of flora and fauna and affected local populations. Portraying lithium as clean energy simply because it is less environmentally disastrous than diesel or coal is a false dichotomy, which discourages stakeholders from pursuing newer, greener energy sources.

The development of artificial intelligence technology is a symbol of incredible progress; however, progress is not one size fits all, and the companies developing these technologies have a responsibility to ensure that marginalized communities do not bear the brunt of the negative side effects of the AI revolution.

Some data farms have shifted to running entirely on clean energy. Iceland's data farms, for example, largely run on clean energy powered by the island's hydroelectric and geothermal resources, and the country has become a popular location for new data centers. These data centers also don't need to be cooled by energy-intensive fans or air conditioning; Iceland's cold climate does the trick. However, Iceland is particularly well suited to hosting data processing centers, and most countries aren't able to replicate its unique environmental conditions.

Large data companies can avoid the pitfalls of lithium batteries by using physical batteries. Made of concrete, these batteries store gravitational potential energy in elevated concrete blocks, which can then be harnessed at any point. This isn't some far-off idea: in a Swiss valley, two 35-ton concrete blocks are suspended from a 246-foot tower. These are an early prototype of what a physical battery could look like and, together, the system is designed to deliver enough power for two thousand homes (two megawatts). Physical batteries are a potential alternative to lithium batteries with a lower cost to the environment and marginalized communities, and they could be built from commonly available materials, such as concrete.
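The physics behind such a battery is simple gravitational potential energy, E = mgh. As an illustrative order-of-magnitude check (not data from the Swiss project), the two prototype blocks alone, raised to the top of the tower, would store a fairly modest amount of energy; commercial designs stack many blocks to reach useful capacity.

```python
# Order-of-magnitude check: energy stored by two 35-metric-ton blocks
# raised ~246 ft, using E = m * g * h. Illustrative only.
G = 9.81                       # gravitational acceleration, m/s^2
mass_kg = 2 * 35_000           # two 35-metric-ton blocks
height_m = 246 * 0.3048        # 246 ft converted to metres (~75 m)
energy_j = mass_kg * G * height_m
energy_kwh = energy_j / 3.6e6  # joules to kilowatt-hours
print(f"{energy_kwh:.1f} kWh") # ~14.3 kWh for the two blocks alone
```

The two-megawatt figure is a power rating (how fast energy can be delivered), while the calculation above gives stored energy; a full-scale tower multiplies the latter by using hundreds of blocks.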

The U.S. government, through the Department of Energy and the Defense Advanced Research Projects Agency (DARPA), has invested billions of dollars in improving lithium batteries, especially by creating solid-state lithium ion batteries, which could provide better safety, energy density, and lifespan compared to traditional lithium ion batteries. Some private companies have made commitments to expand their use of lithium ion technology in their facilities, including Google, which has created a pilot program to phase out diesel generators at some data centers and replace them with lithium ion batteries. These investments are not enough, especially at a time when electric vehicle manufacturers and the U.S. government are making multi-billion dollar investments in new kinds of batteries. Technology companies need to do more to help solve the energy use and storage issues posed by AI.

AI presents a number of advantages for solving the current climate crisis, but the potential environmental side effects are hard to ignore. Technology companies have often been lauded for their creativity and ingenuity, and they need to apply these skills to solve the problems associated with artificial intelligence.

Elsabet Jones is an Assistant Editor in the Council on Foreign Relations Education Department.

Baylee Easterday is a Program Associate with World Learning and a former intern in the Council on Foreign Relations Education Department.

Visit link:
Artificial Intelligences Environmental Costs and Promise - Council on Foreign Relations


The Fight Over Which Uses of Artificial Intelligence Europe Should Outlaw – WIRED

In 2019, guards on the borders of Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK.

The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn't work, and the project's own website acknowledged that the technology "may imply risks for fundamental human rights."

This month, Silent Talker, a company spun out of Manchester Met that made the technology underlying iBorderCtrl, dissolved. But that's not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI, which would ban systems that claim to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.

A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation is intended to protect EU citizens' fundamental rights, like the right to live free from discrimination or to declare asylum. It labels some use cases of AI high-risk, some low-risk, and slaps an outright ban on others. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft, which want the AI Act to draw a distinction between those who make general-purpose AI systems and those who deploy them for specific uses.

Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow use of systems like iBorderCtrl, adding to Europe's existing publicly funded border AI ecosystem. The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for use of AI at the border, such as profiling migrants, went to private companies.

The use of AI lie detectors on borders "effectively creates new immigration policy through technology," says Petra Molnar, associate director of the nonprofit Refugee Law Lab, "labeling everyone as suspicious." "You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise," she says. "That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders."

Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons, such as culture, religion, or trauma, but doing so is sometimes misread as a signal that a person is hiding something. Humans often struggle with cross-cultural communication or speaking to people who have experienced trauma, she says, so why would people believe a machine can do better?


Skills or jobs that won’t be replaced by Automation, Artificial Intelligence in the future – Economic Times

In a fast-changing, high-tech world, the nature of work keeps changing too. Over the last few decades, computers, robots and automation have changed the nature and role of almost every job. Automation and artificial intelligence are now spurring a new revolution, transforming jobs in every industry from IT to manufacturing.

According to some studies, about one-fourth of jobs worldwide are at risk of being automated. This trend sometimes makes people nervous about job security.

"Increased adoption and evolution of automation and artificial intelligence brings along skepticism about the displacement of a large number of roles and skills. Instead, automation and AI should be used to evolve job roles and help make human workers more effective," Arjun Jolly, Principal, Athena Executive Search & Consulting, said.

Here are some skills and professions that can't be easily replaced by automation.

Jobs involving high levels of human interaction, strategic interpretation, critical decision making, niche skills or subject-matter expertise won't be replaced by automation anytime soon — for instance, lawyers, leadership roles, medical professionals, healthcare practitioners, and IT and HR professionals. "We can automate almost every part of the contract workflow process, but will still continue to rely on human intervention to put forward arguments, establish social relations in the negotiation phase, and find nuances in the data, rather than relying on data and algorithms outright," Arjun Jolly said.

Human resources, customer relationship management: While Alexa or Siri are great at following your every direction, they can't really understand how you're feeling. Even the most advanced technology will never be able to comprehend our emotions and respond the way a human can. Whether it's a team leader helping employees through a difficult time, account managers working with clients, or hiring managers looking for the perfect candidate, you need empathy to get those jobs done.

Roles that involve building relationships with clients, customers or patients can never be replaced by automation.

"Automation will continue to take on more operational functions like payroll, filtering of job applications, etc. But the human touch will always remain when it comes to HR. Similarly, even in the healthcare sector, automation and technology are playing an important role. But these need to work alongside humans — doctors, surgeons, nurses and healthcare workers — for diagnosis and treatment," Rupali Kaul, Operational Head-West, Marching Sheep said.

"Besides this, psychologists, caregivers, most engineers, human resource managers, marketing strategists, and lawyers are some roles that cannot be replaced by AI anytime in the near future," Nilesh Jahagirdar, VP Marketing, [x]cube LABS said.

Strategic, critical thinking: Automation can remove or simplify the process of implementing tasks, but it can't provide an overarching strategy that makes each task relevant. Even as the world moves towards digitization and automation, the ability to understand context and complexity before offering solutions remains irreplaceable.

Automation can help implement tasks, but it is a long way from providing a strategy that makes each task relevant within the bigger picture. Regardless of industry, roles that require strategic thinking will always be done by humans.

"So, jobs like solutions architects, designers, professionals providing hospitality services, and consultants with the ability to integrate systems and processes would remain much in demand, IMHO. In essence, skills with the ability to provide superlative customer experiences would be the skills of the future," Ruchika Godha, COO, Advaiya said.

Creativity: Even the most intelligent computers or robots can't paint like Picasso or compose like Mozart. Nobody can explain why some humans are more creative than others, so it's safe to say it's near impossible for computers to replicate the spark of creativity that has led to the world's most amazing feats.

"Automation is programmed and cannot replicate creativity, which is spontaneous and requires imagination, dreaming and collective inspiration — something humans are best at," Rupali Kaul, Operational Head-West, Marching Sheep said.

Nilesh Jahagirdar, VP Marketing, [x]cube LABS said, "While digital technologies such as AI/ML are making quite a few routine jobs redundant, there are some which can't quite be replaced owing to the complexities involved and the fact that AI evolution is not as magical as people think it is. In its current state, it is only repetitive tasks that follow the same rules over and over which can be done by AI. Psychologists, caregivers, most engineers, human resource managers, marketing strategists, and lawyers are some roles that cannot be replaced by AI anytime in the near future."


Use That Everyday A.I. in Your Pocket – The New York Times

Virtual assistants usually hog the spotlight when it comes to talk of artificial intelligence software on smartphones and tablets. But Apple's Siri, Google Assistant, Samsung's Bixby and company aren't the only tools using machine learning to make life easier; other common programs use the technology, too. Here's a quick tour through some common A.I.-driven apps and how you can manage them.

When you set up a new device, you're usually invited to enroll in its facial recognition security program, which captures your image and analyzes it so the program will recognize you in different looks and lighting situations. Later, when you want to unlock the device or use apps like digital payment systems, the camera confirms that your face matches the stored data so you can proceed.

If you decide to use the feature, check your device maker's privacy policy to see where that data is stored. For example, Apple states that Face ID data does not leave your device, and Google says it stores face data on the security chips of its Pixel phones. If you sign up and then have second thoughts, you can always go into your phone's Face ID or Face Unlock settings, delete or reset the data, turn off the feature and stick with a passcode.

If you've ever been typing along on your phone's keyboard and noticed suggested words for what you might type next, that's machine learning in action. Apple's iOS software includes a predictive-text function that bases its suggestions on your past conversations, Safari browser searches and other sources.

Google's Gboard keyboard for Android and iOS can offer word suggestions, and Google has a Smart Compose tool for Gmail and other text-entry apps that draws on personal information collected in your Google Account to tailor its word predictions. Samsung has its own predictive-text software for its Galaxy devices.

The suggestions may save you time, and Apple and Google both state that the customized predictions based on your personal information remain private. Still, if you'd like fewer algorithms in your business, you can turn the feature off. On an iPhone (or iPad), you can turn off Predictive Text in the Keyboard settings.

Google Lens (for Android and iOS) and Apple's Live Text feature use artificial intelligence to analyze the text in images for automatic translation, and they can perform other helpful tasks like Apple's visual look-up. Google Lens can identify plants, animals and products seen through the phone's camera, and these searches are saved. You can delete the information or turn off the data-gathering in the Web & App Activity settings in your Google Account.

In iOS 15, you can turn off Live Text by opening the Settings app, tapping General and then Language & Region and turning off the button for Live Text. Later this year, Live Text is getting an upgrade in iOS 16, in which Apple stresses the role of on-device intelligence in doing the work.

These A.I.-in-action tools are most useful when they have access to personal information like your address and contacts. If you have concerns, read your phone maker's privacy policy: Apple, Google and Samsung all have documents posted on their sites. The nonprofit Common Sense Media has posted independent privacy evaluations for Siri, Google Assistant and Bixby.

Setting up the software is straightforward because the assistant guides you, but check out the app's own settings to customize it. And don't forget the general privacy controls built into your phone's operating system.


Artificial intelligence: a new paradigm in the swine industry – Pig Progress

Machine learning is one of the artificial intelligence approaches most frequently used for modeling, prediction, and management in swine farming. Machine learning models mainly include decision trees, clustering, support vector machines, and Markov chain models, focused on disease detection, behaviour recognition for postural classification, and animal sound detection. Researchers from North Carolina State University and Smithfield Premium Genetics* demonstrated the application of machine learning algorithms to estimate body weight in growing pigs from feeding behaviour and feed intake data.

Feed intake, feeder occupation time, and body weight information were collected from 655 pigs of 3 breeds (Duroc, Landrace, and Large White) from 75 to 166 days of age. Two machine learning algorithms (a long short-term memory network and a random forest) were selected to forecast the body weight of pigs under 4 scenarios. Long short-term memory was chosen for its ability to learn and store long-term patterns in sequence-dependent order, making it well suited to time series data, while the random forest approach served as a representative algorithm in the machine learning space. The scenarios comprised an individually informed predictive scenario, an individually and group informed predictive scenario, a breed-specific individually and group informed predictive scenario, and a group informed predictive scenario. Four models, each implemented with 3 algorithms, were constructed and trained on different subsets of data collected along the grow-finish period to predict the body weight of individuals or groups of pigs.
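The random forest arm of such a study can be sketched in a few lines. The example below is illustrative only, not the authors' code: the data is synthetic, and the feature names, coefficients, and hyperparameters are all assumptions made for the sake of a runnable demonstration.

```python
# Hypothetical sketch: predicting pig body weight from feeding-behaviour
# features with a random forest. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 655  # number of pigs in the cited study

# Assumed features: age (days), daily feed intake (kg),
# daily feeder visits, daily occupation time (min)
age = rng.uniform(75, 166, n)
feed_intake = 0.02 * age + rng.normal(0, 0.2, n)
visits = rng.poisson(10, n)
occupation = rng.uniform(40, 90, n)
X = np.column_stack([age, feed_intake, visits, occupation])

# Synthetic target: body weight grows with age and feed intake
y = 0.8 * age + 15 * feed_intake + rng.normal(0, 3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_test)

# Correlation between predicted and observed weight, one of the
# performance metrics used in the study
corr = np.corrcoef(preds, y_test)[0, 1]
print(f"correlation: {corr:.2f}")
```

In practice the real dataset would carry repeated measurements per pig over the grow-finish period, which is what makes the sequence-aware long short-term memory network a natural companion to the random forest.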

Overall, as pigs matured and gained weight, daily feed intake increased, while the daily number of feeder visits and daily occupation time decreased. The individually informed predictive scenario achieved better predictive performance than the individually and group informed predictive scenario in terms of correlation, accuracy, sensitivity, and specificity. The greatest correlation was 0.87 and the highest accuracy was 0.89 for the individually informed prediction, against 0.84 and 0.85, respectively, for the individually and group informed predictions. The effect of adding feeding behaviour and feed intake data varied across algorithms and scenarios, from a small to a moderate improvement in predictive performance.
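The correlation figures above compare predicted against observed body weights. A minimal sketch of that metric, using made-up weight values rather than any data from the study:

```python
# Illustrative only: Pearson correlation between predicted and observed
# body weights. The weight values (kg) are invented for this example.
import numpy as np

observed = np.array([82.0, 90.5, 101.2, 110.8, 123.4])
predicted = np.array([80.4, 92.1, 99.8, 112.5, 121.0])

corr = np.corrcoef(predicted, observed)[0, 1]
print(f"Pearson correlation: {corr:.2f}")
```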

This study demonstrated the various roles of feeding behaviour and feed intake data in diverse predictive scenarios. The information collected from the period closest to the finishing stage was the most useful for achieving the best predictive performance. Artificial intelligence has the potential to connect feeding behaviour dynamics to body growth and to show how feeding behaviour data can contribute to body weight prediction in group-housed pigs. Artificial intelligence and machine learning can serve as management tools for swine farmers to evaluate and rank individual pigs, adjust feeding strategies during the growth period, and avoid sorting losses at the finishing stage while reducing labour and costs.

Some technologies and tools have been developed for data collection, data processing, and modeling algorithms to evaluate pigs' feeding behaviour and feed intake. These technologies show great potential to enhance decision-making efficiency in the swine industry. However, a standard database or method for data cleaning and selection is required to minimise the time and cost of data processing.

* He Y, Tiezzi F, Howard J, Maltecca C. Predicting body weight in growing pigs from feeding behavior data using machine learning algorithms. Comput Electron Agric. 2021;184:106085. doi:10.1016/j.compag.2021.106085
