
On the Talent Hunt: Work-life balance in the construction sector? Yes, it’s possible with some creativity – TODAY

I work for a mechanical and electrical engineering firm in the construction sector.

Even before the pandemic, it was a herculean task recruiting Singaporean workers given the long hours and nature of our jobs. The situation is now even more challenging amid a tight labour market.

While we managed to complete projects which were delayed during Covid-19, we also clinched new jobs that needed project engineers on site.

Without sufficient manpower, our existing staff would be stretched more thinly, and we had to decline participation in some tenders over the past two years.

Thus, recruitment and retention have become a key focus for the company.

With work-life harmony a buzz phrase in recent years, we started looking at how we can change some practices to be a more appealing employer.

Construction is not the first industry that comes to mind when one talks about work-life harmony. It is even harder for a medium-sized enterprise like us due to our limited resources.

For instance, we found it difficult to implement some flexible work arrangements (FWAs) such as work-from-home (WFH) as our work requires our staff to be on site.

We realised that we had to be creative to achieve better work-life harmony.

First, we started allowing staggered reporting times for our office-based staff, who can choose a shift that suits their family commitments.

Second, for staff who need to be on site, we stepped up digitalisation efforts with cloud servers and a human resources (HR) mobile application so that they can work remotely from site offices and other convenient locations.

Supervisors are also given autonomy to provide staff with time-off to attend to emergencies.

By 2024, a new set of tripartite guidelines will kick in, requiring employers to consider staff FWA requests fairly.

Before this takes place, we have started considering WFH requests for reasons such as caregiving and when employees' children are sick.

These requests are approved by supervisors and HR to ensure that business needs are still met.

We believe that with family-friendly policies, our employees will feel more valued and committed to the company. This in turn boosts productivity and staff morale.

This is also why we have joined the Government's Made for Families initiative to show our commitment towards families.

To further ease our manpower crunch, we work closely with the Building and Construction Authority to offer sponsorships and scholarships for students who will then join us under a bond after they graduate.

We recruited one graduate under the programme this year and are looking at hiring another.

We have also found that utilising digital tools such as Building Information Modelling, which incorporates 3D models to create and manage a construction project throughout its life cycle, helps us attract young talent by showing that we are progressive.

With the easing of border restrictions, our manpower situation is definitely better now, but as we navigate these uncertain times, we will continue to adapt our strategies to ensure that our business and people goals can be met.

ABOUT THE WRITER:

Mrs Sarah Tham, 41, is an associate director of DLE M&E, a mechanical and electrical engineering firm founded in 1975. It currently has over 300 employees.

If you are a business owner with an experience to share or know someone who wishes to contribute to this series, write to voices [at] mediacorp.com.sg with your full name, address and phone number.

Here is the original post:
On the Talent Hunt: Work-life balance in the construction sector? Yes, it's possible with some creativity - TODAY

Read More..

Old computer technology points the way to future of quantum computing – Terrace Standard

Researchers have made a breakthrough in quantum technology development that has the potential to leave today's supercomputers in the dust, opening the door to advances in fields including medicine, chemistry, cybersecurity and others that have been out of reach.

In a study published in the journal Nature on Wednesday, researchers from Simon Fraser University in British Columbia said they found a way to create quantum computing processors in silicon chips.

Principal investigator Stephanie Simmons said they illuminated tiny imperfections on the silicon chips with intense beams of light. The defects in the silicon chips act as a carrier of information, she said. While the rest of the chip transmits the light, the tiny defect reflects it back and turns into a messenger, she said.

There are many naturally occurring imperfections in silicon. Some of these imperfections can act as quantum bits, or qubits. Scientists call those kinds of imperfections spin qubits. Past research has shown that silicon can produce some of the most stable and long-lived qubits in the industry.

"These results unlock immediate opportunities to construct silicon-integrated, telecommunications-band quantum information networks," the study said.

Simmons, who is the university's Canada Research Chair in silicon quantum technologies, said the main challenge with quantum computing was being able to send information to and from qubits.

"People have worked with spin qubits, or defects, in silicon before," Simmons said. "And people have worked with photon qubits in silicon before. But nobody's brought them together like this."

Lead author Daniel Higginbottom called the breakthrough immediately promising because researchers achieved what was considered impossible by combining two known but parallel fields.

Silicon defects were extensively studied from the 1970s through the '90s, while quantum physics has been researched for decades, said Higginbottom, who is a post-doctoral fellow in the university's physics department.

"For the longest time people didn't see any potential for optical technology in silicon defects. But we've really pioneered revisiting these and have found something with applications in quantum technology that's certainly remarkable."

Although the technology is at an embryonic stage, Simmons said quantum computing is the rock 'n' roll future of computers that can solve anything from simple algebra problems to complex pharmaceutical equations or formulas that unlock deep mysteries of space.

"We're going to be limited by our imaginations at this stage. What's really going to take off is really far outside our predictive capabilities as humans."

The advantage of using silicon chips is that they are widely available, understood and have a giant manufacturing base, she said.

"We can really get it working and we should be able to move more quickly and hopefully bring that capability mainstream much faster."

Some physicists predict quantum computers will become mainstream in about two decades, although Simmons said she thinks it will be much sooner.

"In the 1950s, people thought the technology behind transistors was mainly going to be used for hearing aids," she said. No one then predicted that the physics behind a transistor could be applied to Facebook or Google, she added.

"So, we'll have to see how quantum technology plays out over decades in terms of what applications really do resonate with the public," she said. "But there is going to be a lot because people are creative, and these are fundamentally very powerful tools that we're unlocking."

Hina Alam, The Canadian Press

RELATED: US intel warns China could dominate advanced technologies

RELATED: Elon Musk claims Tesla will debut a humanoid robot next year

Tags: Computers and Electronics, Science, SFU

Go here to see the original:
Old computer technology points the way to future of quantum computing - Terrace Standard

Read More..

Watch: How Abu Dhabi is ushering in a new era of computing with state-of-the-art quantum lab – Gulf News

Abu Dhabi: At the heart of Abu Dhabi's science research hub in Masdar, a new era of computing is taking shape. With massive investments towards becoming a leader in the field, Abu Dhabi could well revolutionise quantum computing when a newly developed foundry starts churning out quantum chips this summer.

With the world of computing still undecided on which platform works best to enable, and then scale up, quantum computing, chips manufactured at the laboratory will allow important experiments into the possibilities of various materials and configurations.

Quantum foundry

The laboratory is part of the Quantum Research Centre, one of a number of research interests at the Technology Innovation Institute (TII), which focuses on applied research and is part of the over-arching Advanced Technology Research Council in Abu Dhabi.

"TII Quantum Foundry will be the first quantum device fabrication facility in the UAE. At the moment, it is still under construction. We are installing the last of the tools needed to manufacture superconducting quantum chips. We are hoping that it will be ready soon, and hopefully by then, we can start manufacturing the first quantum chips in the UAE," Alvaro Orgaz, lead for quantum computing control at TII's Quantum Research Centre, told Gulf News.

"The design of quantum chips is an area of active research at the moment. We are also interested in this. So, we will manufacture our chips and install them into our quantum refrigerators, then test them and improve on each iteration of the chip," he explained.

What is quantum computing?

Classical computers process information in bits, tiny on and off switches that are encoded in zeroes and ones. In contrast, quantum computing uses qubits as the fundamental unit of information.

Unlike classical bits, qubits can take advantage of a quantum mechanical effect called superposition, where they exist as 1 and 0 at the same time. One qubit cannot always be described independently of the state of the others either, in a phenomenon called entanglement. "The capacity of a quantum computer increases exponentially with the number of qubits. The efficient usage of quantum entanglement drastically enhances the capacity of a quantum computer to be able to deal with challenging problems," explained Professor Dr José Ignacio Latorre, chief researcher at the Quantum Research Center.
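
Latorre's two claims, superposition and exponentially growing capacity, can be stated compactly in standard notation (a textbook sketch, not the researchers' own formulation):

```latex
% A single qubit is a normalized superposition of both basis states at once:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
% An n-qubit register carries one complex amplitude per n-bit string, so its
% state space, and hence its capacity, grows as 2^n with the number of qubits:
\[
  \lvert \Psi \rangle = \sum_{x \in \{0,1\}^{n}} c_{x} \, \lvert x \rangle ,
  \qquad 2^{n} \ \text{amplitudes} \ c_{x} .
\]
```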

Why quantum computing?

When quantum computers were first proposed in the 1980s and 1990s, the aim was to help computing for certain complex systems such as molecules that cannot be accurately depicted with classical algorithms.

Quantum effects translate well to complex computations in some fields like pharmaceuticals and material sciences, as well as optimisation processes that are important in aviation, oil and gas, the energy sector and the financial sector. "In a classical computer, you can have one configuration of zeroes and ones or another. But in a quantum system, you can have many configurations of zeroes and ones processed simultaneously in a superposition state. This is the fundamental reason why quantum computers can solve some complex computational tasks more efficiently than classical computers," said Dr Leandro Aolita, executive director of quantum algorithms at the Quantum Research Centre.

Complementing classical computing

On a basic level, this means that quantum computers will not replace classical computers; they will complement them.

"There are some computational problems in which quantum computers will offer no speed-up. There are only some problems where they will be superior. So, you would not use a quantum computer which is designed for high-performance computing to write an email," the researcher explained. This is why, in addition to research, the TII is also working with industry partners to see which computational problems may translate well to quantum computing and the speed-up this may provide, once the computers are mature enough to process them.

Quantum effect fragility

At this stage, the simplest quantum computer is already operational at the QRC laboratory in Masdar City. This includes two superconducting qubit chips mounted in refrigerators at the laboratory, even though quantum systems can be created on a number of different platforms.

Here, the superconducting qubit chip is in a cooler that takes the system down to around 10 millikelvin, which is even cooler than the temperature of outer space. "You have to isolate the system from the thermal environment, but you also need to be able to insert cables to control and read the qubits. This is the most difficult challenge from an engineering and a technological perspective, especially when you scale up to a million qubits because quantum effects are so fragile. No one knows exactly the geometric configurations to minimise the thermal fluctuations and the noise, [and this is one of the things that testing will look into once we manufacture different iterations of quantum chip]," Dr Aolita explained.

Qubit quality

The quality of the qubit is also very important, which boils down to the manufacture of a chip with superconducting current that displays quantum effects. The chips at TII are barely 2x10 millimetres in size, and at their centre is a tiny circuit known as the Josephson junction that enables the control of quantum elements.

"It is also not just a matter of how many qubits you have, as the quality of the qubits matters. So, you need to have particles that preserve their quantum superposition, you need to be able to control them, have them interact the way you want, and read their state, but you also have to isolate them from the noise of the environment," he said.

Optimistic timeline

Despite these massive challenges to perfect a minute chip, Dr Aolita was also quite hopeful about the work being accomplished at TII, including discussions with industry about the possible applications of quantum computing.

"I think we could see some useful quantum advantages in terms of classical computing power in three to five years," he said. "[Right now], we have ideas, theories, preliminary experiments and even some prototypes. Quantum computers even exist, but they are small and still not able to outperform classical supercomputers. But this was the case with classical computing too. In the 1940s and 1950s, a computer was like an entire gym or vault. Then the transistor arrived, which revolutionised the field and miniaturised computers to much smaller regions of space that were also faster. Something similar could happen here and it really is a matter of finding which kind of qubit to use and this could ease the process a lot. My prediction for a timeline is optimistic, but not exaggerated," the researcher added.

Science research

Apart from the technological breakthroughs, the QRC's efforts are likely to also improve Abu Dhabi's status as a hub for science and research.

"The UAE has a long tradition of adopting technologies and incorporating technologies bought from abroad. This is now [different in] that the government is putting a serious stake in creating and producing this technology and this creates a multiplicative effect in that young people get more enthusiastic about scientific careers. This creates more demand for universities to start new careers in physics, engineering, computer science, mathematics. This [will essentially have] a long-term, multiplicative effect on development, independent of the concrete goal or technical result of the project on the scientific environment in the country," Dr Aolita added.

The QRC team currently includes 45 people, but this will grow to 60 by the end of 2022, and perhaps to 80 people in 2023. "We also want to prioritise hiring the top talent from across the world," Dr Aolita added.

Excerpt from:
Watch: How Abu Dhabi is ushering in a new era of computing with state-of-the-art quantum lab - Gulf News

Read More..

How Data Has Changed the World of HR – ADP

In this "On the Job" segment from Cheddar News, Amin Venjara, General Manager of Data Solutions at ADP, describes the importance of data and how human resources leaders are relying on real-time access to data now more than ever. Venjara offers real-world examples of data's impact on the top challenges faced by organizations today.

Businesses big and small have been utilizing the latest tech and innovation to make the new remote and hybrid working environments possible.

Speaking with Cheddar News, Amin Venjara (AV) says relying on quality, accessible data to take action is how today's HR teams are impacting the modern workforce.

Q: How does data influence the role of human resources (HR)?

AV: The last few years have thrust HR teams into the spotlight. Think about all the changes we've seen managing the onset of the pandemic, the return to the workplace, the great resignation and all the challenges that's brought and even the increased focus on diversity, equity and inclusion. HR has been at the focal point of responding to these challenges. And in response, we've seen an uptick in the use of workforce analytics and benchmarking. HR teams need the data to be able to help make decisions in real time as things are changing. And they're using it with the executives and managers they support to make data-driven decisions.

Q: Clearly data-driven solutions are critical in today's workforce as you've been discussing, where has data made the most significant impact?

AV: When we talk to employers, we continuously hear about four key areas related to their workforce: attracting top talent, retaining and engaging talent, optimizing labor costs, and fostering a diverse, equitable and inclusive workforce.

To give an example of the kind of impact that data can have: we have a product that helps organizations calculate and take action on pay equity. They can see gaps by gender and race/ethnicity, based on internal and market data. Over 70% of active clients using this tool are seeing a decrease in pay equity gaps. If you look at the size of this - they're spending over a billion dollars to close those gaps. That's not just analytics and data - that's taking action. So, think about the impact that has on the message about equal pay for equal work. And also the impact it has on productivity, and the lives of those individual workers and their families.

Q: In today's tight talent market, employers increasingly need help recruiting and even retaining workers. How can data and machine learning alleviate some of those very pressing challenges?

AV: Here's an interesting thing about what's happening in the current labor market. U.S. employment numbers are back to pre-pandemic levels with 150 million workers on the payroll. However, we're at the lowest ratio of unemployed workers to job openings we've seen in over 15 years. To put it simply, it's a candidate's market out there, and jobs are chasing workers.

Two things to keep in mind. First, employers have to employ data-driven strategies to be competitive. We're seeing labor markets changing - remote work, hybrid work, new expectations on pay and even the physical locations of workers, since people have moved a lot. Employers need access to real-time, accurate data on the supply and demand of labor and on compensation to hire the right workers and keep the ones they have.

The second thing is really about the adoption of machine learning in recruiting workflows. We're seeing machine learning being adopted in chatbots for personalizing the experience and even helping with scheduling, but also AI-based algorithms to help score candidate profiles against jobs. Overall, the best organizations are combining technology and data with their recruiting and hiring managers to decrease the overall time to fill open jobs.

Q: Becoming data confident might be a concern, or even intimidating, for some. What's an example of how an organization can use data well?

AV: A lot of organizations are trying to make this happen. We recently worked with a quick service restaurant with about 2,000 locations across the U.S. In light of the supply chain challenges and demographic shifts of the last couple of years, they wanted to know how to combine and optimize the supply at each location based on expected demand.

Their research enabled them to correlate demographics - things like age, income and even family status - to items on the menu like salads, sandwiches and kids' meals. But what they needed was a stronger signal on what's happening in the local context of each location. They had used internal data for so long, but things had shifted. By using our monthly anonymized and aggregated data from nearly 20% of the workforce, they were able to optimize their demand forecasting models and increase their supply chain efficiency. There are two lessons to think about. They had a key strategic problem, and they worked backwards from that. That's a key piece of becoming data confident - focusing on something that matters and making data-driven decisions about it. The second is about going beyond the four walls of your organization. There are so many different and new sources of data available due to the digitization of our economy. In order to unlock the insight and the strength of signal you need, you really need to look for the best sources to get there.

Q: How do you see the role of data evolving as we look toward the future of work?

AV: Data has really become the language of business right now. I see a couple of trends as we look out. The first is the acceleration of data in the flow of work. When you look at a lot of organizations today, when people need data, they have to go to a reporting group or a business intelligence group to request it. Then it takes a couple of cycles to get it right and then make a decision. The cycle time can be high.

What I expect to see now is data more and more in the flow of work, where business decision makers are working immediately because they have the right data at their fingertips. You see that across domains. Second is just the separation between haves and have-nots. With the increasing speed of change, data haves are going to be able to outstrip data have-nots. Those who have invested in building the right organizational, technical, and cultural muscle will see the spoils of this in the years to come.

Learn more

In the post-pandemic world of work, the organizations that prioritize people first will rise to the top. Find out how to make HR more personalized to adapt to today's changing talent landscape. Get our guide: Work is personal

Tags: Compensation Diversity and Inclusion Trends and Innovation Salary and Wages Technology HCM Technology HR Recruiting and Hiring Articles

The rest is here:
How Data Has Changed the World of HR - ADP

Read More..

ClearBuds: First wireless earbuds that clear up calls using deep learning – University of Washington

Engineering | News releases | Research | Technology

July 11, 2022

ClearBuds use a novel microphone system and are one of the first machine-learning systems to operate in real time and run on a smartphone. (Raymond Smith/University of Washington)

As meetings shifted online during the COVID-19 lockdown, many people found that chattering roommates, garbage trucks and other loud sounds disrupted important conversations.

This experience inspired three University of Washington researchers, who were roommates during the pandemic, to develop better earbuds. To enhance the speaker's voice and reduce background noise, ClearBuds use a novel microphone system and one of the first machine-learning systems to operate in real time and run on a smartphone.

The researchers presented this project June 30 at the ACM International Conference on Mobile Systems, Applications, and Services.

"ClearBuds differentiate themselves from other wireless earbuds in two key ways," said co-lead author Maruchi Kim, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. "First, ClearBuds use a dual microphone array. Microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the speaker's voice."

While most commercial earbuds also have microphones on each earbud, only one earbud is actively sending audio to a phone at a time. With ClearBuds, each earbud sends a stream of audio to the phone. The researchers designed Bluetooth networking protocols to allow these streams to be synchronized within 70 microseconds of each other.

The team's neural network algorithm runs on the phone to process the audio streams. First it suppresses any non-voice sounds. Then it isolates and enhances any sound that's coming in at the same time from both earbuds: the speaker's voice.

"Because the speaker's voice is close by and approximately equidistant from the two earbuds, the neural network can be trained to focus on just their speech and eliminate background sounds, including other voices," said co-lead author Ishan Chatterjee, a doctoral student in the Allen School. "This method is quite similar to how your own ears work. They use the time difference between sounds coming to your left and right ears to determine which direction a sound came from."
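
The binaural cue Chatterjee describes can be illustrated in a few lines: cross-correlating the two synchronized streams and taking the peak lag yields the inter-microphone delay. This is a toy sketch, not the ClearBuds pipeline; production systems typically use a frequency-weighted variant such as GCC-PHAT for robustness to reverberation:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_delay(left: np.ndarray, right: np.ndarray, sample_rate: int) -> float:
    """Estimate the lag of `left` relative to `right`, in seconds."""
    # The cross-correlation peaks at the lag where the two signals line up.
    corr = correlate(left, right, mode="full")
    lags = correlation_lags(len(left), len(right), mode="full")
    return lags[np.argmax(corr)] / sample_rate

# Toy check: one "earbud" hears the same noise 24 samples (0.5 ms) later.
rng = np.random.default_rng(0)
rate = 48_000
signal = rng.standard_normal(2048)
delayed = np.roll(signal, 24)
print(estimate_delay(delayed, signal, rate))  # ~0.0005 s
```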

Shown here, the ClearBuds hardware (round disk) in front of the 3D printed earbud enclosures. (Raymond Smith/University of Washington)

When the researchers compared ClearBuds with Apple AirPods Pro, ClearBuds performed better, achieving a higher signal-to-distortion ratio across all tests.

"It's extraordinary when you consider the fact that our neural network has to run in less than 20 milliseconds on an iPhone that has a fraction of the computing power compared to a large commercial graphics card, which is typically used to run neural networks," said co-lead author Vivek Jayaram, a doctoral student in the Allen School. "That's part of the challenge we had to address in this paper: How do we take a traditional neural network and reduce its size while preserving the quality of the output?"

The team also tested ClearBuds in the wild by recording eight people reading from Project Gutenberg in noisy environments, such as a coffee shop or on a busy street. The researchers then had 37 people rate 10- to 60-second clips of these recordings. Participants rated clips that were processed through ClearBuds' neural network as having the best noise suppression and the best overall listening experience.

One limitation of ClearBuds is that people have to wear both earbuds to get the noise suppression experience, the researchers said.

But the real-time communication system developed here can be useful for a variety of other applications, the team said, including smart-home speakers, tracking robot locations or search and rescue missions.

The team is currently working on making the neural network algorithms even more efficient so that they can run on the earbuds.

Additional co-authors are Ira Kemelmacher-Shlizerman, an associate professor in the Allen School; Shwetak Patel, a professor in both the Allen School and the electrical and computer engineering department; and Shyam Gollakota and Steven Seitz, both professors in the Allen School. This research was funded by the National Science Foundation and the University of Washington's Reality Lab.

For more information, contact the team at clearbuds@cs.washington.edu.

Here is the original post:
ClearBuds: First wireless earbuds that clear up calls using deep learning - University of Washington

Read More..

C3 AI Named a Leader in AI and Machine Learning Platforms – Business Wire

REDWOOD CITY, Calif.--(BUSINESS WIRE)--C3 AI (NYSE: AI), the Enterprise AI application software company, today announced that Forrester Research has named it a Leader in AI and Machine Learning Platforms in its July 2022 report, The Forrester Wave: AI/ML Platforms, Q3 2022.

"Ahead of its time, C3 AI's strategy is to make AI application-centric by building a growing library of industry solutions, forging deep industry partnerships, running in every cloud, and facilitating extreme reuse through common data models," the report states.

"We are pleased to be recognized as a leader in AI and ML platforms," said Thomas Siebel, C3 AI CEO. "I'm delighted to see C3 AI's significant investments in enterprise AI software be acknowledged. I believe that Forrester Research has made an important contribution, having published the first professional comprehensive analysis of enterprise AI and machine learning platforms," Siebel continued, "changing the dialogue from a focus on disjointed tools to the importance of cohesive enterprise AI platforms. This is certain to accelerate the market adoption of enterprise AI and simplify often protracted decision processes."

Of the 15 vendors evaluated in the report, C3 AI received the top ranking in the Strategy category.

Download The Forrester Wave: AI and Machine Learning Platforms, Q3 2022 report here.

About C3 AI

C3 AI is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products: the C3 AI Suite, an end-to-end platform for developing, deploying, and operating enterprise AI applications, and C3 AI Applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally.

Read more from the original source:
C3 AI Named a Leader in AI and Machine Learning Platforms - Business Wire

Read More..

Automated identification of hip arthroplasty implants using artificial intelligence | Scientific Reports – Nature.com

Study design and radiograph acquisition

After institutional review board approval, we retrospectively collected all radiographs taken between June 1, 2011 and Dec 1, 2020 at one university hospital. The images were collected with Neusoft PACS/RIS Version 5.5 on a personal computer running Windows 10. We confirm that all methods were performed in accordance with the relevant guidelines and regulations. Images were collected from surgeries performed by 3 fellowship-trained arthroplasty surgeons to ensure a variety of implant manufacturers and implant designs. At the time of collection, images had all identifying information removed and were thus de-identified. Implant type was identified through the primary surgery operative note and cross-checked with implant sheets. Implant designs were only included in our analysis if more than 30 images per model were identified [14].

From the medical records of 313 patients, a total of 357 images were included in this analysis.

Although Zimmer and Biomet merged (Zimmer Biomet), these were treated as two distinct manufacturers. The following four stem-and-cup designs from the four industry-leading manufacturers were included: Biomet Echo Bi-Metric (Zimmer Biomet), Biomet Universal RingLoc (Zimmer Biomet), Depuy Corail (Depuy Synthes), Depuy Pinnacle (Depuy Synthes), LINK Lubinus SP II, LINK Vario cup, and Zimmer Versys FMT and Trilogy (Zimmer Biomet). Implant designs that did not meet the 30-image threshold were not included. Figure 1 shows an example of cup and stem anteroposterior (AP) radiographs of each included implant design. The four types of implants are denoted as type A, type B, type C, and type D respectively in this paper.

An example of cup and stem radiographs of each included implant design.

We used convolutional neural network (CNN)-based algorithms for classification of hip implants. Our training data consist of images of the anteroposterior (AP) view of the hips. For each image, we manually cut the image into two parts: the stem and the cup. We trained four CNN models: the first using stem images (the stem network), the second using cup images (the cup network), the third using the original uncut images (the combined network), and the fourth an integration of the stem and cup networks (the joint network).

Since the models involve millions of parameters while our data set contains fewer than one thousand images, it was infeasible to train a CNN model from scratch using our data. Therefore, we adopted the transfer learning framework to train our networks [17]. Transfer learning is a paradigm in the machine learning literature that is widely applied in scenarios where the training data are scarce relative to the scale of the model [18]. Under the transfer learning framework, the model is first initialized to a model pretrained on other data sets that contain enough data for a different but related task. Then, we tune the model on our data set by performing gradient descent (backpropagation) only on the last two layers of the networks. As the parameters in the last two layers are comparable in number with the size of our data set (for the target task), and the parameters in the earlier layers have been tuned in the pre-trained model, the resulting network model can have satisfactory performance on the target task.

In our case, the CNN models we used are based on the established ResNet50 network pre-trained on the ImageNet data set [19]. The target task and our training data sets correspond to the images of the AP views of the hips (stem, cup, and combined).
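
A minimal PyTorch sketch of this setup: ResNet50 initialized from ImageNet weights, with only the final stage and a new four-class head left trainable. The paper says only that the last two layers were tuned, so exactly which modules to unfreeze is an assumption here.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # implant types A, B, C, D

def build_model() -> nn.Module:
    # Transfer learning: start from ImageNet-pretrained ResNet50
    # (the weights enum requires torchvision >= 0.13).
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    # Freeze every parameter ...
    for param in model.parameters():
        param.requires_grad = False
    # ... then unfreeze the last residual stage, approximating the paper's
    # "gradient descent only on the last two layers" (an assumption).
    for param in model.layer4.parameters():
        param.requires_grad = True
    # Replace the 1000-class ImageNet head with a trainable 4-class head.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model
```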

Figure 2 shows an overview of the framework of our deep-learning-based method.

Overview of the framework of our deep learning-based method.

Our dataset contained 714 images from 4 different kinds of implants.

We followed standard procedures to pre-process our training data so that it could work with a network trained on ImageNet. We rescaled each image to a size of 224*224 and normalized it according to ImageNet standards. We also performed data augmentation, i.e., random rotation, horizontal flips, etc., to increase the amount of training data and make our algorithm robust to the orientation of the images.
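
The preprocessing and augmentation just described map naturally onto torchvision transforms. A sketch, assuming the standard ImageNet normalization constants and an illustrative rotation range (the paper does not report exact augmentation parameters); radiographs are converted to three channels to match the pretrained network's input:

```python
from torchvision import transforms

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

# Training pipeline: rescale to 224x224, augment, normalize per ImageNet.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-ray -> 3-channel input
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=10),        # rotation range assumed
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

# Validation/test pipeline: same rescaling and normalization, no augmentation.
eval_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```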

We first divided the set of patients into three groups of sizes ~60% (group 1), ~30% (group 2), and ~10% (group 3). This split technique was used on a per-design basis to ensure the ratio of each implant remained constant. Next, we used the cup and stem images of patients in group 1 for training, those of patients in group 2 for validation, and those of patients in group 3 for testing. The validation set was used to compute cross-validation loss for hyper-parameter tuning and early stopping determination.
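
The paper does not spell out how the per-design split was implemented; one plausible realization is a two-stage stratified split over patient IDs, sketched here with scikit-learn:

```python
from sklearn.model_selection import train_test_split

def split_patients(patient_ids, designs, seed=42):
    """Split patients ~60/30/10, stratified by implant design."""
    # Stage 1: 60% training, 40% held out, with design ratios preserved.
    train_ids, rest_ids, _, rest_designs = train_test_split(
        patient_ids, designs, test_size=0.4, stratify=designs,
        random_state=seed)
    # Stage 2: split the held-out 40% into validation (30% of the total)
    # and testing (10% of the total), again stratified by design.
    val_ids, test_ids = train_test_split(
        rest_ids, test_size=0.25, stratify=rest_designs, random_state=seed)
    return train_ids, val_ids, test_ids
```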

We adopted the adaptive gradient method ADAM [20] to train our models. Based on the cross-validation loss, we chose the hyper-parameters for ADAM as learning rate α = 0.001, β₁ = 0.9, β₂ = 0.99, ε = 10⁻⁸, and weight decay = 0. The maximum number of epochs was 1000 and the batch size was 16. The early stopping threshold was set to 8. During the training process of each network, the early stopping threshold was hit after around 50 epochs. As we mentioned above, we trained four networks in total.
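
Wiring the reported hyper-parameters into a training loop with early stopping might look like the following sketch (data loaders, device placement, and checkpointing omitted; names are illustrative):

```python
import torch

def train(model, train_loader, val_loader, max_epochs=1000, patience=8):
    # ADAM with the hyper-parameters reported above.
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad),
        lr=0.001, betas=(0.9, 0.99), eps=1e-8, weight_decay=0)
    loss_fn = torch.nn.CrossEntropyLoss()
    best_val_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:  # batch size 16 set in the loader
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
        # Early stopping: quit after `patience` epochs without validation gain.
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val_loss < best_val_loss:
            best_val_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
```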

The first network is trained with the stem images, the second with the cup images. The third network is trained with the original uncut images, which is one way we propose to combine the power of stem images and cup images. We further integrate the first and the second network as an alternative way of jointly utilizing stem and cup images. The integration was done via the following logistic-regression based method. We collected the outputs of the stem network and the cup network (both are of the form of a 4-dimensional vector, with each element corresponding to the classification weight the network gives to the category of implants), and then fed them as the input to a two-layer feed-forward neural network, and trained the network with the data from the validation set. The integration is similar to a weighted-voting procedure among the outputs of the stem network and the cup network, with the weighting votes computed through the validation data set. Note that the above construction relied on our dataset division procedure, where the training set, validation set, and testing set, each contained the stem and cup images of the same set of patients. We referred to the resulting network constructed from the outputs of stem network and cup network as the joint network.
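
Structurally, the joint network is a small feed-forward head stacked on the concatenated 4-dimensional outputs of the frozen stem and cup networks. A sketch (the hidden width is an assumption; the paper specifies only a two-layer feed-forward network trained on the validation split):

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    """Weighted-voting-style integration of the stem and cup networks."""

    def __init__(self, stem_net: nn.Module, cup_net: nn.Module, hidden: int = 16):
        super().__init__()
        self.stem_net, self.cup_net = stem_net, cup_net
        # Two-layer feed-forward head over the two concatenated 4-dim outputs.
        self.head = nn.Sequential(
            nn.Linear(8, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, stem_img: torch.Tensor, cup_img: torch.Tensor):
        with torch.no_grad():  # the base networks are already trained
            scores = torch.cat(
                [self.stem_net(stem_img), self.cup_net(cup_img)], dim=1)
        return self.head(scores)  # head is trained on the validation split
```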

We tested our models (stem, cup, joint) using the testing set. The prediction result on each testing image was a 4-dimensional vector, with each coordinate representing the classification confidence of the corresponding category of implants.

Since we were studying a multi-class classification problem, we directly present the confusion matrices of our methods on the testing data and compute the operating characteristics generalized to multi-class classification.
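
Per-class operating characteristics follow mechanically from a confusion matrix by one-vs-rest counting; a scikit-learn sketch (class labels assumed to be the A-D types above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_stats(y_true, y_pred, labels=("A", "B", "C", "D")):
    """Sensitivity and specificity per class, one-vs-rest."""
    cm = confusion_matrix(y_true, y_pred, labels=list(labels))
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp   # row i: images actually of class i
        fp = cm[:, i].sum() - tp   # column i: images predicted as class i
        tn = cm.sum() - tp - fn - fp
        print(f"{label}: sensitivity={tp / (tp + fn):.3f}, "
              f"specificity={tn / (tn + fp):.3f}")
```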

The institutional review board approved the study with a waiver of informed consent because all images were anonymized before the time of the study.

Read more:
Automated identification of hip arthroplasty implants using artificial intelligence | Scientific Reports - Nature.com

Read More..

Harnessing the power of artificial intelligence – UofSC News & Events – SC.edu

On an early visit to the University of South Carolina, Amit Sheth was surprised when 10 deans showed up for a meeting with him about artificial intelligence.

Sheth, the incoming director of the university's Artificial Intelligence Institute at the time, thought he would need to sell the deans on the idea. Instead, it was the deans pitching the importance of artificial intelligence to him.

"All of them were telling me why they are interested in AI, rather than me telling them why they should be interested in AI," Sheth said in a 2020 interview with the university's Breakthrough research magazine. "The awareness of AI was already there and the desire to incorporate AI into the activities that their faculty and students do was already on the campus."

Since the university announced the institute in 2019, that interest has only grown. There are now dozens of researchers throughout campus exploring how artificial intelligence and machine learning can be used to advance fields from health care and education to manufacturing and transportation. On Oct. 6, faculty will gather at the Darla Moore School of Business for a panel discussion on artificial intelligence led by Julius Fridriksson, vice president for research.

South Carolina's efforts stand out in several ways: the collaborative nature of research, which involves researchers from many different colleges and schools; a commitment to harnessing the power of AI in an ethical way; and the university's commitment to projects that will have a direct, real-world impact.

This week, as the Southeastern Conference marks AI in the SEC Day, we look at some of the remarkable efforts of South Carolina researchers in the area of artificial intelligence.

See the original post:
Harnessing the power of artificial intelligence - UofSC News & Events - SC.edu

Read More..

In iOS 16 A New iPhone Tool Makes Photobombing A Thing of the Past – CNET

This story is part of WWDC 2022, CNET's complete coverage from and about Apple's annual developers conference.

Apple's iOS 16 will include a lot of new iPhone features like editable Messages and a customizable lock screen. But there was one feature that truly grabbed my attention during WWDC 2022, despite taking up less than 15 seconds of the event.

The feature hasn't been given a name, but here's how it works: You tap and hold on a photo to separate a picture's subject, like a person, from the background. And if you keep holding, you can then "lift" the cutout from the photo and drag it into another app to post, share or make a collage, for example.

Technically, the tap-and-lift photo feature is part of Visual Lookup, which was first launched with iOS 15 and can recognize objects in your photos such as plants, food, landmarks and even pets. In iOS 16, Visual Lookup lets you lift that object out of a photo or PDF by doing nothing more than tapping and holding.

During the WWDC, Apple showed someone tapping and holding on the dog in a photo to lift it from the background and share in a Message.

Robby Walker, Apple senior director of Siri Language and Technologies, demonstrated the new tap-and-lift tool on a photo of a French bulldog. The dog was "cut out" of the photo and then dragged and dropped into the text field of a message.

"It feels like magic," Walker said.

Sometimes Apple overuses the word "magic," but this tool does seem impressive. Walker was quick to point out that the effect was the result of an advanced machine-learning model, which is accelerated by core machine learning and Apple's neural engine to perform 40 billion operations in a second.

Knowing the amount of processing and machine learning required to cut a dog out of a photo thrills me to no end. Many times new phone features need to be revolutionary or solve a serious problem. I guess you could say that the tap-and-hold tool solves the problem of removing the background of a photo, which to at least some could be a serious matter.

I couldn't help but notice the similarity to another photo feature in iOS 16. On the lock screen, the photo editor separates the foreground subject from the background of the photo used for your wallpaper. This makes it so lock screen elements like the time and date can be layered behind the subject of your wallpaper but in front of the photo's background. It makes it look like the cover of a magazine.

I tried the new Visual Lookup feature in the public beta for iOS 16, and I am still impressed by how quickly and reliably it works. If you have a spare iPhone to try it on, a developer beta for iOS 16 is already available, and a public beta version of iOS 16 will be out in July.

For more, check out everything that Apple announced at WWDC 2022, including the new M2 MacBook Air.

See the original post here:
In iOS 16 A New iPhone Tool Makes Photobombing A Thing of the Past - CNET

Read More..

UF partners with CIA on improving cybersecurity – News – University of Florida – University of Florida

From the shutdown of an oil pipeline to disrupted access to government, business and healthcare system databases, high-profile cyberattacks in 2021 prompted heightened interest in improving the nation's cybersecurity.

Answers on how to do that may come from a collaboration between the University of Florida and the U.S. Central Intelligence Agency, the first of its kind in the nation.

The university and the CIA have entered an agreement to study how artificial intelligence and machine learning applications (AIML) can be used to detect and deter malicious agents that infiltrate computer networks. The work will be carried out by researchers associated with UF's Florida Institute for National Security.

"If you're operating retroactively in cybersecurity, oftentimes you are too late," said Damon Woodard, principal researcher and newly appointed director of the Florida Institute for National Security. "This collaboration will accelerate our ability to understand and expand the research on AI applications of AIML to cybersecurity."

One area of research will be on reinforcement learning, which attempts to mimic how humans learn through trial and error. Woodard said little work has been done on applying this method of machine learning to cybersecurity problems. Researchers will explore this technology on simple problems and then see if solutions can be scaled up.

"In terms of a cyberattack, you are trying to figure out what the person attacking you is trying to do so you can anticipate and make adjustments on your side to stop them," Woodard said.
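
As a toy illustration of that trial-and-error loop (this is not the institute's code), a tabular Q-learning agent can learn which of a few "mitigations" pays off in each of a handful of network states:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 4
# A random toy environment, unknown to the agent: a reward for each
# (state, mitigation) pair, and the state each pair transitions to.
rewards = rng.uniform(-1.0, 1.0, size=(n_states, n_actions))
transitions = rng.integers(n_states, size=(n_states, n_actions))

q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

state = 0
for step in range(10_000):
    # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(q[state]))
    reward, next_state = rewards[state, action], transitions[state, action]
    # Q-learning update: nudge toward reward plus discounted future value.
    q[state, action] += alpha * (
        reward + gamma * q[next_state].max() - q[state, action])
    state = next_state

print(np.argmax(q, axis=1))  # learned best mitigation for each state
```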

The Identity Theft Resource Center reported in January there were 1,603 cyberattack-related data breaches in 2021, an increase of about 500 over the previous year. Ransomware attacks are also on the rise, doubling in each of the past two years, the nationally recognized nonprofit organization said.

The hope, Woodard said, is the work will revolutionize the way the world thinks about cybersecurity and provide insights and technologies that can better protect data and strengthen security across both the government and private sectors. The team also includes two UF graduate students.

"I'm excited to see the ramifications of this project in the security domain as well as in other domains, such as biomedical and business," said Olivia Dizon-Paradis, a doctoral student in Electrical and Computer Engineering. "I'm hoping my involvement in this project will help jumpstart my research career in lifelong machine learning."

Stephen Wormald, also a doctoral student in Electrical and Computer Engineering, said he was excited about being able to work with leading researchers to develop state-of-the-art technology.

"My involvement will develop personal skills in research, writing and mathematics that I can use long-term in industry," Wormald said. "I hope to apply my skills to develop technology and study basic research problems that improve individuals' quality of life."

The Florida Institute for National Security was launched in May with the goal of taking a leading role in multidisciplinary research on national security through long-term partnerships with industry, academe and government that lead to commercial products and spin-off companies.

The project is the latest initiative in UF's sweeping focus on artificial intelligence, a $1 billion effort to advance AI across the curriculum and in research and industry. The university's initiative, and the work of the institute, is aided by access to the HiPerGator supercomputer.

Woodard said working with the CIA offers the opportunity to share project expertise and provides exposure to many diverse challenges.

"Working with the CIA is a major benefit because they present interesting constraints in cybersecurity," Woodard said. "You're dealing with worst-case scenarios to prepare for everything from low-quality data to low-resolution images. This level of research allows us to reach our full capacity for understanding potential shortcomings."

Excerpt from:
UF partners with CIA on improving cybersecurity - News - University of Florida - University of Florida

Read More..