Category Archives: Data Science
Second Genome Presents New Preclinical Data at AACR 2022 Demonstrating that SG-3-06686, a CXCR3 Chemokine Receptor Modulator, Induces Migration of…
Lead oncology program candidate SG-3-06686 shows potential to become first in class CXCR3 immune activator
BRISBANE, Calif., April 8, 2022 /PRNewswire/ -- Second Genome, a biotechnology company that leverages its proprietary platform to discover and develop precision therapies and biomarkers, presented data demonstrating that the Company's CXCR3 chemokine receptor modulator, SG-3-06686 (referred to as SG-3-00802DC in the AACR presentation), enhances effector T cell migration to improve the immune system's activity against tumors, and showed anti-tumor activity in preclinical models as a monotherapy and in combination with anti-Programmed Death Protein-1 (PD-1) treatment. The data is being presented at the American Association for Cancer Research (AACR) Annual Meeting, held April 8-13 virtually and in New Orleans, Louisiana.
"We are excited about our pre-clinical data that show the ability of this potential mechanism to go beyond checkpoint blockade as an emerging new immunologic strategy for treating cancers. Checkpoint inhibitor therapy has been transformative for the clinical outcome of cancer patients and additional new checkpoint targets, such as TIGIT/LAG3, continue to be added to this important treatment approach. However, if the proper immune cells are underrepresented or lacking in the tumor microenvironment, these interventions tend to be less effective, and a significant portion of patients have limited or transient benefit. To augment and improve anti-tumor efficacy, we need to develop new therapeutics to better facilitate the ability of effector cells to access the tumor microenvironment. CXCR3 pathway modulation is a well validated and exciting approach to potentially enhance effector cell recruitment and improve existing immunotherapy interventions," said Joe Dal Porto, Ph.D., Chief Scientific Officer of Second Genome. "We look forward to submitting an investigational new drug (IND) application for SG-3-06686, a potential first in class CXCR3 immune modulator, in early 2023."
This data presentation can be accessed during the online-only poster session at the AACR. The information is provided below:
Session Category: Clinical Research Excluding Trials
Session Title: Immuno-oncology
Abstract Number: 6349
Title: Targeting the CXCR3 pathway with a novel peptide drug candidate mobilizes the immune system to enhance anti-tumor immunity
SG-3-06686 is a potent CXCR3 chemokine receptor engager that acts as a positive allosteric modulator to increase receptor activity in response to the three known ligands (CXCL9/10/11). It has been demonstrated to increase the activity of CXCL11 on CXCR3 activation by greater than 10-fold, shifting potency from a nM to a pM range, with similar effects on CXCL9 and CXCL10. This activity in turn drives strong antitumor activity in several preclinical cancer models.
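For context on what such a potency shift means in practice, here is a minimal sketch, in Python, of how a fold-shift in ligand potency (EC50) is typically computed. The EC50 values below are hypothetical placeholders, not figures from the Second Genome data.

```python
# Illustrative only: how a fold-shift in ligand potency (EC50) is computed.
# The numbers below are hypothetical and are NOT taken from the SG-3-06686 study.

def fold_shift(ec50_baseline_nM: float, ec50_modulated_nM: float) -> float:
    """Return how many times more potent the ligand becomes (ratio of EC50s)."""
    return ec50_baseline_nM / ec50_modulated_nM

# Hypothetical example: CXCL11 EC50 drops from 5 nM to 0.4 nM (i.e., 400 pM)
baseline_nM = 5.0
modulated_nM = 0.4

print(f"Fold increase in potency: {fold_shift(baseline_nM, modulated_nM):.1f}x")
# Prints 12.5x, i.e., a greater-than-10-fold shift toward the sub-nanomolar range
```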
The poster (#6349) entitled, "Targeting the CXCR3 pathway with a novel peptide drug candidate mobilizes the immune system to enhance anti-tumor immunity," will be available for on-demand viewing on the AACR website and will also be made available on the Company's website at https://www.secondgenome.com/events/.
About Second Genome

Second Genome is a biotechnology company that leverages its proprietary technology-enabled platform to discover and develop transformational precision therapies based on novel microbial genetic insights. We built a proprietary drug discovery platform with machine-learning analytics, customized protein engineering techniques, phage library screening, mass spec analysis and CRISPR, which we couple with traditional drug development approaches to progress the development of precision therapies for wide-ranging diseases. Second Genome is advancing lead programs in IBD and cancer into IND-enabling studies. We also collaborate with industry, academic and governmental partners to leverage our platform and data science capabilities. We hold a strategic collaboration with Gilead Sciences, Inc., utilizing our proprietary platform and comprehensive data sets to identify novel biomarkers associated with clinical response to Gilead's investigational medicines. We also hold a strategic collaboration with Pfizer (formerly Arena Pharmaceuticals) to identify microbiome biomarkers associated with clinical response for their lead program in gastroenterology, etrasimod. For more information, please visit http://www.secondgenome.com.
Investor Contact: Argot Partners | 212-600-1902 | [email protected]
Media Contact: Argot Partners | 212-600-1902 | [email protected]
SOURCE Second Genome
BYTES 2022 ASEAN – get ready to witness the largest Big Data Analytics summit ever – Latest Digital Transformation Trends | Cloud News – Wire19
BYTES 2022 is a two-day virtual mega-scale summit focused on Big Data Analytics that will be shedding light on the Southeast Asia markets and their requirements. The goal? To facilitate collaboration between various participating organizations and some of the top technology providers from the data and analytics industry.
The data-driven Southeast Asian economy is projected to reach $1 trillion by 2030, and the Big Data Analytics industry stands at its forefront. The increasing digitization of this region, alongside an IT-rich environment, has generated colossal volumes of data that fuel valuable insights for formulating business strategies and improving operational efficiency across all industries, including technology-adjacent fields such as robotics.
Based on these unprecedented developments, and after its phenomenal success in the African region, Tradepass will be hosting its flagship Big Data Analytics Summit, BYTES, in the ASEAN region virtually on 12-13 April 2022.
The Big Data Analytics Summit will be a huge event, with 1,000+ professionals from across ASEAN and 30 world-class speakers sharing the latest developments in this field. The summit will be a great opportunity for data professionals in the region to learn about cutting-edge solutions and conferences from other parts of Asia, and to hear from 20+ internationally renowned speakers who are experts on this topic.
In a statement issued following their participation announcement, the Global Field CTO at Databricks, Chris D'Agostino, expressed: "Only 13% of organizations are succeeding at their data and AI strategy, yet the successful application of data and AI has never been a greater necessity for survival than now. Enterprises need a modern data and AI strategy that is open, simple, collaborative, and empowers everyone across the organization to make faster, more informed decisions with a unified view of all their data."
Judy Nam (Principal Solutions Engineer Director, Dataiku), while confirming her participation as a speaker, gave an insight on the growth of AI: "Developing an AI strategy for your business is becoming standard to keep up with the competition, but there are many challenges along the way. I'm excited to be sharing how organizations can enable everyday AI across business functions while growing the complexity and scale of AI models and projects."
Organizations like Databricks, Dataiku, Cloudera, Tech Data, BMC, Aerospike, Denodo, Snowflake, and Fivetran will also be participating in the summit to showcase their latest solutions.
The virtual summit will revolve around insightful presentations on the most pressing industry topics and deep-dive panel discussions with some of the leading thought leaders, live showcases of the best solutions, etc.
Some of the confirmed speakers for the summit include Chris D'Agostino, Global Field CTO, Databricks; Judy Nam, Principal Solutions Engineer Director, Dataiku; Sam Majid, Chief Technology & Innovation Officer, Malaysian Communications & Multimedia Commission; Juan Kanggrawan, Head Data Analytics & Digital Products, Jakarta Smart City; Abhishek Pratap Singh, Senior Vice President Engineering, DBS; Ram Kumar, Chief Data & Analytics Officer, Cigna; Varun Verma, APAC Head of Data & Analytics, The Heineken Company; Kirill Odintsov, Head of Data Science, Home Credit Indonesia; Akanksha Rastogi, Head of Data and Insights, Foodpanda Thailand; John Daniel Funtanilla, Advanced Analytics Lead, Nestle Philippines; Nikola Sucevic, Senior Vice President Advance Analytics, Smartfren; and many others.
Another key speaker for the summit, Sam Majid (Chief Technology & Innovation Officer, Malaysian Communications & Multimedia Commission), expressed his view on the industry: "To thrive in the digital economy, avant-garde organizations leverage on its leadership and widely accessible data to continuously deliver and delight citizens and customers."
Organizer and CEO of Tradepass, Sudhir Jena, expressed: "With Southeast Asia rapidly adopting emerging technologies, Bytes 2022 ASEAN will acknowledge the many intricacies concerning Big Data Analytics to empower the organizations with improved business function."
Wire19 is the official media partner of the event. Subscribe to our newsletter here to receive all the latest updates directly in your inbox.
Read Next: Wire19's Listing of Global Tech Events in 2022: Hybrid and Digital IT, Telecom, AI Conferences
Goldacre recommendations to improve care through use of data – GOV.UK
Patient care in the NHS will be improved through more efficient and safer access to health data, which will drive innovation and lead to potentially lifesaving research.
Professor Ben Goldacre, Bennett Professor of Evidence-Based Medicine at the University of Oxford, has today published the findings from his independent review into how the NHS can achieve better, broader and safer use of health data. Learning lessons from the pandemic, the review advises how to utilise health data in healthcare and sets out 185 recommendations to the government.
The pandemic has demonstrated the immense value of health data in driving research to improve patient outcomes and save lives such as the discovery of dexamethasone as the first treatment for COVID-19. Large-scale data analysis enabled better understanding of patient outcomes more rapidly than previously possible. The speed and scale of this data analysis was possible through the interconnected nature of NHS systems, as well as specific legal measures to enable data access quickly.
Data also allows the NHS to continue delivering world-leading care, for instance by helping to understand whether different patient groups respond better to different treatment options, or to anticipate future demands on the healthcare system by tracking the prevalence of disease.
Health and Social Care Secretary, Sajid Javid, said:
Countless lives have been saved through the pandemic after health data enabled ground-breaking research.
As we move forwards, millions of patients could benefit from the more efficient use of health data through boosting innovation and ensuring the NHS can continue to offer cutting-edge care, saving lives.
I want to thank Professor Ben Goldacre, his team, and all those who contributed to this review. This work, alongside our upcoming data strategy, will help to transform the NHS on our road to recovery.
The review makes a range of proposals, including:
Professor Ben Goldacre said:
NHS data is a phenomenal resource that can revolutionise healthcare, research and the life sciences. But data alone is not enough. We need secure, efficient platforms and teams with skills to unleash this potential. This will be difficult, technical work. It is inspiring to see momentum grow for better, broader, safer use of health data across so many sectors.
The government's response to the review will be included in the upcoming Health and Social Care Data Strategy, which will set the direction for the use of data in a post-pandemic healthcare system.
The Goldacre Review was launched on 9 February 2021.
The Data Strategy was published in draft form in June 2021.
The government recently announced £200 million of funding for health data research and development.
Ben Goldacre is a clinical researcher at the University of Oxford where he is Director of the Bennett Institute for Applied Data Science, and Bennett Professor of Evidence-Based Medicine in the Nuffield Department of Primary Care Health Sciences.
He advises government on better uses of data and leads an academic team that uses large health datasets to deliver research papers and tools including:
He is also active in public engagement. His books, including Bad Science, have sold over 700,000 copies in more than 30 countries and his online lectures have over 5 million views.
V&A Consulting Engineers Announces the Release of Their Proprietary VANDA GRAVITY MAIN CLEANING INDEX – GlobeNewswire
OAKLAND, Calif., April 06, 2022 (GLOBE NEWSWIRE) -- The VANDA Gravity Main Cleaning Index was created by V&A to provide a standard, with objective criteria, for recording condition assessment data during gravity main cleaning. The index presents typical examples of conditions observed when cleaning gravity main segments. This index also becomes the standard rating methodology for machine learning need-for-cleaning predictions (characterized as LOF, or likelihood of failure) generated by V&A's data science services.
"The development of our indexsupports improved data recording and standardized reporting. We are driven to contribute to the water and wastewater industries with innovation, alignment, and collaboration, andthe indexes are just one way we are supporting municipalities," saidDebra Kaye, V&A CE0 & President.
The index is presented as a durable ruler to serve as a tool for professionals assessing and reporting gravity main cleaning conditions. The VANDA Gravity Main Cleaning Index facilitates data collection that builds a foundation for data analysis and improved gravity main cleaning. The adoption of a standardized index positions collection systems for low-cost, sophisticated analytics that provide maintenance supervisors with insights for improving the cleaning processes.
"V&A has been working with collection systems for the past two years to understand how to enable data-driven recommendations to gravity main cleaning process decision making. Because gravity mains are typically cleaned prior to CCTV inspection, CCTV data that provides operational guidance for removal of FOG, roots, or debris is limited," said Lars Stenstedt, V&A Data Science Manager. "Promoting the adoption of a cleaning condition assessment data standard is an important step in enabling collection systems to take advantage of available data science techniques and processes to become as efficient as possible with sewer system overflow (SSO) prevention maintenance."
CWEA Annual Conference attendees are invited to visit booth #629 for a free copy of the VANDA Cleaning Index and to attend "Gravity Sewer Main Cleaning: Innovative Use of Data Science for Optimization," presented by Lars Stenstedt on April 11 at 10 a.m. at the Hyatt Regency Sacramento, 1209 L Street.
"Padre Dam has been collecting condition assessment data during gravity main cleaning for many years. This type of data has enabled Padre Dam to leverage our cleaning crew knowledge and observations for continuous improvement of our maintenance processes, Padre Dam supports V&A's promotion of the importance of gathering this kind of condition assessment data during gravity main cleaning," said Daniel Lockart, Maintenance Supervisor, Padre Dam Municipal Water District.
"The Town of Hillsborough collects condition assessment data during the cleaning process and has been working with V&A on the development of this standard index for the past year," said Rick Pina, Sewer Supervisor, Town of Hillsborough. "The process of developing this standard by leveraging senior staff and their institutional knowledge has enabled Hillsborough to streamline the onboarding and training of new maintenance staff. Understanding data-driven predictions helps us to focus our maintenance efforts removing FOG, roots and debris, as well as developing future CIP projects for the collection system."
About V&A Consulting Engineers
Headquartered in Oakland, CA, with offices in San Diego, CA, Houston, TX, and Sarasota, FL, and founded in 1979, V&A Consulting Engineers (V&A) is a multidisciplinary engineering team, led by Debra Kaye, CEO, concentrating on civil infrastructure, primarily in the fields of water, wastewater, and light rail transit. Visit http://www.vaengineering.com.
Media Contact: Robin Rhea | rrhea@vaengineering.com | 510.903.6600
Leadership, talent and the digital future – Bangkok Post
How can leaders approach future-proofing their workforce?
I am frequently amazed at how much digital technology has transformed my work and the work of my people. For me as a senior leader, some aspects are great. It is much easier to communicate with more people inside and outside the organisation. Receiving updates is easier. I can also contribute to things like recruitment more effectively. At the same time, many of the skills I need have been transformed, and some are obsolete.
I am very conscious that the effect of digitisation is much bigger for many of my people. They may be more digitally native, but they do not have the experience of other staff in a rapidly changing workplace.
However, these people, many of whom we can consider the future of our organisations, cannot just be replaced. There simply is no alternative talent available to replace them. Demand far outstrips supply. We face a highly complex situation full of challenges and opportunities. The answer, I believe, is to develop these people as digital talent, with the right blend of capabilities.
The challenge is that technology and possibilities are outpacing the traditional education approaches many companies cling to, even if most development now takes place online. If leaders do not commit to developing digital talent at scale, they will run their businesses into economic choke points.
Additionally, the young people coming into the workforce today do not have the digital skills their industry requires. The available digital talent will be more expensive, and companies will not have a big enough pool of skilled workers in data science and AI.
There is no one-size-fits-all answer to these challenges.
My organisation has gone increasingly digital in the last year or so. We have added entire new departments and completely new types of talent. We are a relatively small and specialised organisation. We do not have the advantages of many big organisations, but we also do not face all the challenges of smaller businesses. We have made many mistakes, and we are still learning, but I would like to share a few of my observations in the hope they may help leaders like myself.
First, leaders need to take some time to understand the digital skills their future business needs. Do not jump at whatever is trendy. For us, data was a capability we needed to build. We also had to learn more about virtual delivery very quickly. Understanding helped us plan for development at scale and effectiveness in the areas critical for our business. I also had to consider our culture to identify potential barriers and actively lead my people in the required direction.
Second, I had to become, along with the rest of the leadership team, somewhat of a digital champion. I had to talk about digital. I had to be seen to become more digitally savvy, which was an enjoyable challenge. I had to highlight potential opportunities to use technology to do things better. Leaders doing this is essential. Leaders who don't become more digitally inclined can become massive barriers to successful wider adoption.
Third, I had to rethink how I could digitally educate my people at scale. I was lucky that I had some people already engaged in their own digital upskilling. Additionally, being in the education business provided some insights and resources. But fundamentally, how we did things transformed.
There were far fewer classes and courses. Instead, there were learning journeys and applications for our organisation's key jobs to be done, and there were challenges. Since many of us were relatively new to this, there was a lot more discussion, teaching each other, and sharing of tips and insights.
It worked, and although we have a long way to go on what is probably a never-ending journey, some of my most senior leaders and people are remarkably more digitally advanced than I would have thought possible.
Finally, I learned that developing hard digital skills by themselves was not sufficient. The data team I mentioned needed to develop the softer and thinking capabilities to not just make sense but to engage everyone else with the data.
In this time of a critical digital skills shortage, I do not believe there are any shortcuts. Leaders need to commit to a holistic approach to future-proofing their workforce. They have to commit to the long term and invest in their people.
They also need to teach their people that it is not just about building digital skills, it is about building the business. Whether you like it or not, for most companies, digital is now the business. Your customers are already more digital, and if your people do not have the digital skills to give them what they want, someone else will.
Even your dinosaurs can be brought along if you make developing their capabilities easy and user-friendly.
Arinya Talerngsri is Chief Capability Officer and Managing Director at SEAC, Southeast Asia's Lifelong Learning Center. She can be reached by email at arinya_t@seasiacenter.com or https://www.linkedin.com/in/arinya-talerngsri-53b81aa. Talk to us about how SEAC can help your business during times of uncertainty at https://forms.gle/wf8upGdmwprxC6Ey9
Canada needs workers so why aren’t more companies hiring the neurodivergent? – CBC News
The founders of a job fair for those with autism don't only want to find careers for an untapped workforce; they also hope employers will realize these highly skilled job seekers can help solve a national labour shortage.
"People with autism are very much capable of working and they are some of the best employees," said Neil Forester who, along with his business partner Xavier Pinto, created the Spectrum Works Job Fair that ran Friday.
Now in its sixth year, the job fair has grown from having 150 attendees to almost 2,000 job seekers with autism, all looking to connect with recruiters and hiring managers at major tech, finance, hospitality and retail companies across the country. Though it's been held in various cities, the job fair was a virtual event this year and last.
Getting companies to take part, though, has been a struggle.
Of the 10,000 employers Forester and his team have reached out to in the last six years, just 40 companies took part in this year's job fair.
"The majority of the time we don't get any response," Forester said.
The creators of the fair say they understand there is a wide range of abilities across the autism spectrum and, while perhaps not every person with autism is employable, both Forester and Pinto are confident a large portion of this community can and wants to work.
And Forester questions why more employers aren't looking at this neurodiverse talent pool to help solve the labour shortages that so many companies are experiencing.
In the last quarter of 2021, Canadian employers were looking to fill 915,500 jobs, up 63 per cent from the year before, according to Statistics Canada.
And with the current unemployment rate so low, "virtually all industries are bumping up against labour shortages," wrote Royal Bank economist Nathan Janzen in an economic update this week.
Even with the demand for workers, employment barriers remain for Canadians with autism.
Data compiled by the Public Health Agency of Canada found that in 2017 just 33 per cent of Canadian adults with autism reported being employed compared to 79 per cent of adults without a disability.
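A quick back-of-the-envelope check of the figures cited above (assuming the 63 per cent increase is measured against the same quarter a year earlier) can be done in a few lines of Python:

```python
# Back-of-the-envelope arithmetic on the statistics quoted above.
# Assumes the 63% increase is relative to the same quarter one year earlier.
vacancies_q4_2021 = 915_500
year_over_year_increase = 0.63
implied_prior_year = vacancies_q4_2021 / (1 + year_over_year_increase)
print(f"Implied Q4 2020 vacancies: ~{implied_prior_year:,.0f}")  # ~561,656

# Employment-rate gap reported by the Public Health Agency of Canada (2017 data)
employed_with_autism = 0.33
employed_without_disability = 0.79
gap = employed_without_disability - employed_with_autism
print(f"Employment-rate gap: {gap:.0%}")  # 46%
```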
Forester said he was unaware of just how few neurodiverse employees there are in the workforce before he started the job fair.
"I just didn't realize how big of a problem this was or how big of an issue this was to the community," he said.
Javier Herrera is one of the comparatively few Canadians who are both employed and living with autism.
He attended the Spectrum Works job fair last year and got a job offer.
"It was overall a very positive experience. I met not only recruiters, but also other facilitators, coaches, government agencies, non-profits, you name it," said Herrera who now works as a business systems analyst with an insurance company based in Vancouver.
Herrera is encouraged to see that some employers purposefully seek out people with autism, but he feels that "as a society we are still doing baby steps" to get more people who are neurodiverse into the workforce.
That said, there are some companies specifically tapping into this talent pool, including two of the so-called "Big Four" accounting firms.
In the last few years, Ernst & Young has made strides in diversifying its hiring strategy.
The multinational launched the Neurodiversity Centre of Excellence in Toronto in November 2020, with a goal of recruiting employees with autism, ADHD or other sensory and cognitive differences.
"We're dying for talent as an organization," said Anthony Rjeily, a partner at Ernst & Young and the company's neurodiversity program national leader. "So we wanted to see if there was any talent pool out there that we could potentially tap into."
Since the launch of the program, the company has recruited 45 neurodiverse employees to their Toronto, Vancouver, Halifax and Montreal offices and plans to expand recruitment in other cities.
Rjeily said the initiative has more than paid off, noting the retention rate among neurodiverse candidates that the company has hired is 98 per cent.
"The level of creativity, the innovation, the productivity that they are able to deliver is incredible," he said.
Mohit Verma was one of the first people Ernst & Young hired in 2020 through the neurodiversity recruitment program.
"At EY my work revolves around certain sub-competencies such as automation, data science and, to some extent, blockchain," Mohit said in an interview with CBC News. "So far I have been part of five to six main projects."
Deloitte Canada is another corporation with an eye on hiring the neurodiverse.
In an attempt to better understand the barriers and workplace needs of neurodiverse workers, the accounting giant teamed up with Auticon Canada, a global technology consulting firm that employs people with autism, which recently did a survey along with Deloitte on what the needs of employees with autism might be.
The survey, 'Embracing neurodiversity at work: How Canadians with autism can help employers close the talent gap,' was done between July and October 2021. It included 454 respondents with autism who completed the survey online; seven companies with neurodiversity in their workforces were also interviewed over videoconferencing.
In their survey, they found that 41.7 per cent of respondents were underemployed, meaning they were working on a part-time, contract or temporary basis or were doing jobs that were "under their educational capabilities," said Roland Labuhn, who is a partner with Deloitte Canada.
One of the most eye-opening findings was that the hiring process itself could be a major barrier, as 40 per cent of those polled said the job interview was a "great challenge" for them.
"The people we surveyed felt that the interview was a trick or scary," said Labuhn, who worries that the typical job interview process could eliminate some highly qualified candidates with autism.
With a goal of getting better at both recruiting and retaining neurodiverse workers, companies like Deloitte and Ernst & Young are trying to change the interview process so that it focuses more on competence rather than how a candidate might behave in a certain scenario.
That kind of accommodation provides hope to people like Pinto and Forester.
The inspiration for their job fair came out of Pinto's concerns about his son's future. Xavi, 12, is on the spectrum and is "so creative," his father said.
He's "really focused on what he wants done."
And seeing more employers begin to sign up for the job fair gives him hope that he's helping to create a world in which his son can go after his dreams.
Data Science Central – Big Data News and Analysis
The Myth of Analytic Talent Shortage Vincent Granville | March 31, 2022 at 7:01 pm
I tested the job market in the last two weeks, both as an applicant, and as a hiring manager. I share my experience here...
By Rex Ahlstrom, CTO & EVP Growth & Innovation, Syniti The modern enterprise is composed of a variet...
The Biden Administration made a recent announcement that it was setting up an exploratory committee for the creation of ...
DSC Weekly Digest 29 March 2022 Back in September, I made a prediction: Covid-19 would spike throughout the winter but f...
Inventory management is an essential part of any eCommerce business. Especially if you are an eCommerce business owner j...
Statistics gives business owners the freedom to evaluate how their websites are performing. The evaluation involves a co...
There is no denying the importance of the internet and IT in the business scene. Businesses hailing from all sectors are...
We are in the middle of a business model revolution. And we are active participants in that revolution. We have been...
It is common for growing organizations to reach a point where their existing data solution is no longer adequate for the...
Astronomy has seen an exponential rise in data collection over the last decade. This requires new methods for data analy...
Statistics and Data Science MicroMasters
About the Program
Demand for professionals skilled in data, analytics, and machine learning is exploding. The U.S. Bureau of Labor Statistics reports that demand for data science skills will drive a 27.9 percent rise in employment in the field through 2026. Data scientists bring value to organizations across industries because they are able to solve complex challenges with data and drive important decision-making processes. Not only is there a huge demand, but there is a significant shortage of qualified data scientists, with 39% of the most rigorous data science positions requiring a degree higher than a bachelor's.
This MicroMasters program in Statistics and Data Science (SDS) was developed by MITx and the MIT Institute for Data, Systems, and Society (IDSS). It is a multidisciplinary approach comprised of four online courses and a virtually proctored exam that will provide you with the foundational knowledge essential to understanding the methods and tools used in data science, and hands-on training in data analysis and machine learning. You will dive into the fundamentals of probability and statistics, as well as learn, implement, and experiment with data analysis techniques and machine learning algorithms. This program will prepare you to become an informed and effective practitioner of data science who adds value to an organization.
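For a flavor of the hands-on data analysis the program describes, here is a minimal, self-contained Python sketch (not drawn from the course material) that simulates noisy data and recovers a model's parameters by least squares:

```python
# Illustrative only; not taken from the MITx course material.
# Simulate noisy linear data and recover slope/intercept by ordinary least squares,
# the kind of probability-plus-data-analysis exercise the program emphasizes.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
true_intercept, true_slope, noise_sd = 1.0, 2.5, 2.0
y = true_intercept + true_slope * x + rng.normal(0, noise_sd, n)

# Fit via the least-squares solution to [1, x] @ beta = y
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated intercept: {beta[0]:.2f}, estimated slope: {beta[1]:.2f}")
```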
To complete the SDS MicroMasters program, learners will need to take the three core courses and one out of two electives. Once learners have passed their four courses, they will then take the virtually-proctored Capstone exam to earn the MicroMasters program credential in SDS. The credential can be applied, for admitted students, towards a Ph.D. in Social and Engineering Systems (SES) through the MIT Institute for Data, Systems, and Society (IDSS) or may accelerate your path towards a Master's degree at other universities around the world.
Anyone can enroll in this MicroMasters program. It is designed for learners who want to acquire sophisticated and rigorous training in data science without leaving their day job and without compromising quality. There is no application process, but college-level calculus and comfort with mathematical reasoning and Python programming are highly recommended if you want to excel.
All the courses of this program are taught by MIT faculty and administered by the Institute for Data, Systems, and Society (IDSS), at a similar pace and level of rigor as an on-campus course at MIT. This program brings MIT's rigorous, high-quality curricula and hands-on learning approach to learners around the world, at scale.
Data Scientists: Don’t Let Career Growth Keep You From Coding – Built In
Abe Gong says coding is a part of his identity even though it's not a part of his current role.
"A lot of data people start off as a coder," Gong, founder and CEO of data quality and collaboration company Superconductive, said. "You spend a lot of time hands-on-keyboard. Along the way, you get very good at understanding the business, storytelling and working with other stakeholders. The skill set ends up looking a lot like management."
"And therein lies the tension," he said. Career advancement can often mean moving into more managerial roles, and can translate into less time doing the hands-on work of coding and interacting directly with data: the work many data scientists love and what drew them to the field in the first place.
But that doesnt have to be the case. Here are some strategies data scientists can use to stay hands on as they advance in their careers.
Advancing in your career doesn't always have to mean moving into management. For those who want to keep their hands in coding and data on a daily basis, Giri Tatavarty, vice president of data science at retail data and analytics company 84.51, recommended investigating whether your company has a technical track for career advancement. Those pursuing such an option can grow as individual contributors and basically become technical specialists.
"These experts typically spend more of their time actually developing the skill and techniques and experimenting, than managing people or working with the other responsibilities of a typical management job," Tatavarty said.
Tatavarty has pursued this technical track himself and is a senior individual contributor at 84.51. He described his average day as being 30 percent to 40 percent working on innovative projects he leads and 20 percent to 30 percent on conducting reviews of projects being led by others on the team. The rest of the time is spent working on strategizing for the technical vision of the company, like determining the best strategy for scalable data science methods.
"It's a mix of strategy, innovation, day-to-day delivery and reviews of what is being done," he said. While this is different from what he used to do as a more junior data scientist and has a broader scope, it all still involves daily coding.
"[It] would not be a tenable situation to be in the technical track and not code," he said, adding that he would never want to. Answering business questions with data and math is a large part of what drew him to the field of data science, and the more he does it, the more questions he finds to solve.
More and more companies are starting to realize the value of offering a non-managerial career advancement track, he said. In his experience, companies have begun offering multiple advancement tracks over the past few years, especially at ones pursuing cutting-edge data science.
"Just like you need a top-notch brain surgeon to do your surgery or a very good engineer to build something really hard, you need a highly technical data scientist to solve business problems," he said, giving self-driving cars and natural language processing as examples. "Many of the frontiers of data science require very specialized skill and experience and also focused time to spend on that."
More on Data Science: 11 Data Science Programming Languages to Know
Even if more companies are recognizing the value of having a technical track for data science career advancement, that doesnt mean every company offers it. If your company doesnt, Tatavarty recommended pursuing the same ultimate goal of the technical track to become a technical specialist without the formal structure.
But that takes planning, he said, because you only have so much time. "These specializations usually take a lot more focus and going in a single direction, so there needs to be a plan to spend time and focus on one direction rather than trying 10 things."
That plan starts with picking your domain or the area in which you want to specialize, he said. That could be technical areas like natural language processing, computer vision or A/B testing, according to Tatavarty, but it could also be business sectors like finance or advertising in which data science is applied. Creating a specialization plan also includes finding the right mentors and working with your manager to identify projects that will help you in your goal. Depending on the area of specialization you select, Tatavarty said it can take anywhere from a few months to a few years to become the go-to person at your company for that topic.
"Definitely, whether you have technical track or you do not have technical track, you will be highly valued and you will grow," he said.
It's hard to specialize if you don't stay put for very long. There tends to be a lot of turnover in data science, with the average data scientist only staying at a company for 1.7 years, according to a 2021 report by 365 Data Science. This is in part because of the widespread opportunities in the field, according to Ramaa Nathan, director of data science at EVERSANA, a company that applies AI to rare diseases.
"The problem is, if you keep switching a lot, you do not build expertise in any one thing," she said. Instead, she recommends that those who want to keep coding as they advance their career stick with a company. This not only builds up technical skills and seniority in the role, but also, most importantly, institutional knowledge within the company. That institutional knowledge is a specialization in its own right and will help keep you in a hands-on role.
Nathan gave her situation as an example. While she is a senior data scientist working with a team and does not do 100 percent of the coding like she used to in more junior roles, she guides and oversees the projects that others are implementing. Because of her institutional knowledge, she knows, down to the smallest detail, everything from the intricacies of client data to what her team has done on each project, and is able to explain and correct mistakes directly. She described her position as director of data science as a complete hands-on data science role.
Within her own company, where Nathan has institutional knowledge and her direct involvement with coding and data is well-known, there is nothing preventing her from being hands on, she said. That would not necessarily be the case at another company.
"If I were to jump and go to another company, I can tell you that if I'm expecting the same role I cannot get hands on, because that company might say, 'No, at this role, we cannot expect you to be hands on and somebody else is doing it,'" she said. "Whereas by building up seniority in your own company, having that institutional knowledge, you can still be hands on in your higher roles."
Staying put to specialize wont work for everyone. So, another way to advance your data science career without leaving coding or daily data work behind is to put yourself in a situation where you structurally cant get away from the hands-on work of data science: Find a startup, a small company or a small team.
"They're great opportunities for growth, but once you get to the top, you're challenged to be a jack of all trades and master of all," said Dylan Beal, vice president of analytics at Cane Bay Partners VI, a management consultancy. Working with small teams means it won't be easy to hand off all the data analytics and coding responsibilities. You get to be a senior contributor to the company while analyzing data, developing models and managing a small team.
Elena Ivanova's experience as the head of data science for CarParts.com, an online provider of aftermarket auto parts, bears out this dynamic. When she started with the company, there was a limited budget for data science and data analytics, so they were not able to hire more data scientists for a while. This meant she had to be working with everything related to data at the company, which is exactly how she likes it.
"The data, for me, is everything," she said.
Coming from academia, Ivanova was drawn to the field of data science by the allure of applying algorithms to real-world business needs and because she loved being an investigative researcher. Leading a small team allows her to do that every day. Even as the company has grown, the data science team has stayed relatively small. So, while she now oversees a team, she has not left coding behind.
Ivanova described her days as starting with checking in with her team and helping them work through any challenges they might have, but being mostly consumed with exploring data and solving problems. The cost and time related to international freight, and the impact frequent changes in those can have on the company, are examples of those problems she has to solve with data to help the company make good decisions that will keep it profitable.
Whats more, each new addition to the small team rather than increasing Ivanovas managerial tasks and taking away from hands-on work actually increases her opportunity for coding and data work.
"Getting a new person allows me to bring more ideas that I can tackle and start working on," she said. "Right now it's a data science team that's cross-functional, but still, we don't have a lot of people so I still have a lot of opportunities. That's why I'm still helping my people to look at data, look at the code [and] get some of the data modeling myself."
More on Data Science: What's the Ideal Ratio of Junior-to-Senior Data Scientists?
In some situations, management is an unavoidable element of becoming more senior as a data scientist. In those cases, technical mentoring of more junior data science professionals is a great way to keep your hands in coding and data.
Ashley Pitlyk, senior director of data science at Codility, a technical recruitment platform, described it as taking a player-coach role.
"Ensure that you're still doing peer reviews of team code before models go into production," she said. "Encourage your team to have development hours where they spend 1 to 2 hours a week learning something new and sharing it monthly or quarterly with the team, and take part in it with your team."
Gong also recommended taking the player-coach role. Its something he does with junior members of his team working on deadlines without tight timelines. This can allow him to keep his coding and direct data work skills sharp.
Mentoring isn't just something that happens if you go down the management track either. Even though he pursued the technical track at his company, Tatavarty said that mentorship is a big part of what he does as a senior individual contributor. For him, technical mentoring happens when he is working through the science reviews of ongoing data science projects, but also when those leading the projects run into a challenge. Both situations require that he be hands on with the coding and data and keep very familiar with it.
But mentoring and staying hands on in data science have an interesting reciprocal relationship. Mentoring can help keep you doing coding and direct data work, but you also have to stay active to make your mentoring matter.
"It only takes a few months for you to be irrelevant if you stop coding," Tatavarty said. "If you don't code, what you're saying is more opinion rather than substantiated claims from an expert or backed by data, and you slowly lose your credibility or respect and people just listen to you because you are a higher pay grade. So you always need to be hands on."
While coding and direct data work is what draws many people to data science, and what many love about the role, Nathan urged other data scientists not to restrict themselves.
"It helps, in a way, to explore other opportunities and not just be stuck within the same thing," she said. In her own career, her varied experiences, which started with coding out models for high-frequency trading through to project management and entrepreneurship, have made her more versatile in her current position as a senior data scientist. When a project manager on the data team left, for example, she was able to step in since she had experience doing project management.
Nathan also said that those other experiences were clarifying to her. She described her role as a project director as one where she was not able to work directly with the data.
"The few times when I did get access and I was able to code something, that was the happiest day of my life," she said. That was her a-ha moment, where she realized that, to be happy in her work, she had to be hands-on with data and coding.
"So, going out, doing something different, in a way makes you realize how much you really like it," she said. "It helps to take some time to at least try other roles to know what it is that you really want."
New data-sharing requirements from the National Institutes of Health are a big step toward more open science – Technical.ly
Starting on Jan. 25, 2023, many of the 2,500 institutions and 300,000 researchers that the US National Institutes of Health supports will need to provide a formal, detailed plan for publicly sharing the data generated by their research. For many in the scientific community, this new NIH Data Management and Sharing Policy sounds like a no-brainer.
The incredibly quick development of rapid tests and vaccines for COVID-19 demonstrate the success that can follow the open sharing of data within the research community. The importance and impact of that data even drove a White House Executive Order mandating that the heads of all executive departments and agencies share COVID-19-related data publicly last year.
I am the director of the Rochester Institute of Technology's Open Programs Office. At Open@RIT, my colleagues and I work with faculty and researchers to help them openly share their research and data in a manner that provides others the rights to access, reuse and redistribute that work with as few barriers or restrictions as possible. In the sciences, these practices are often referred to as open data and open science.
The journal Nature has called the impact of the NIHs new data management policy seismic, saying that it could potentially create a global standard for data sharing. This type of data sharing is likely to produce many benefits to science, but there also are some concerns over how researchers will meet the new requirements.
The National Institutes of Health has had data-sharing guidelines in place for years, but the new rules are by far the most comprehensive. (Photo via Wikimedia Commons)
The NIHs new policy around data sharing replaces a mandate from 2003. Even so, for some scientists, the new policy will be a big change. Dr. Francis S. Collins, then director of the NIH, said in the 2020 statement announcing the coming policy changes that the goal is to shift the culture of research so that data sharing is the norm, rather than the exception.
Specifically, the policy requires two things. First, that researchers share all the scientific data that other teams would need in order to validate and replicate the original research findings. And second, that researchers include a two-page data management plan as part of their application for any NIH funding.
So what exactly is a data management plan? Take an imaginary study on heat waves and heatstroke, for example. All good researchers would collect measurements of temperature, humidity, time of year, weather maps, the health attributes of the participants and a lot of other data.
Starting next year, research teams will need to have determined what reliable data they will use, how the data will be stored, when others would be able to get access to it, whether or not special software would be needed to read the data, where to find that software and many other details, all before the research even begins, so that these things can be included in the proposal's data management plan.
Additionally, researchers applying for NIH funding will need to ensure that their data is available and stored in a way that persists long after the initial project is over.
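To make the idea concrete, here is a minimal sketch of how the key fields of such a plan might be captured in machine-readable form for the imaginary heat-wave study above. The field names and values are hypothetical illustrations, not an official NIH template.

```python
# Hypothetical, machine-readable summary of a data management plan for the
# imaginary heat-wave study described above. Field names are illustrative,
# not an official NIH template.
data_management_plan = {
    "data_collected": ["temperature", "humidity", "time of year", "weather maps",
                       "participant health attributes"],
    "formats": ["CSV", "GeoTIFF"],
    "repository": "a domain-appropriate public repository (to be named in the plan)",
    "access_timeline": "no later than publication of the primary findings",
    "software_needed": ["any CSV reader", "open-source GIS tools"],
    "software_availability": "freely available, open source",
    "preservation_period_years": 10,  # data must persist beyond the project itself
    "human_subjects_protections": "de-identified before sharing",
}

for field, value in data_management_plan.items():
    print(f"{field}: {value}")
```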
The NIH has stated that it will support with additional funding the costs related to the collection, sharing and storing of data.
The open sharing of data has a history of promoting scientific excellence and was central to the Human Genome Project that first mapped the entire human genome. (Photo via Wikimedia Commons)
The NIHs case for the new policy is that it will be good for science because it maximizes availability of data for other researchers, addresses problems of reproducibility, will lead to better protection and use of data and increase transparency to ensure public trust and accountability.
The first big change in the new policy, the requirement to specifically share the data needed to validate and replicate findings, seems aimed at the proliferation of research that can't be reproduced. Arguably, by ensuring that all of the relevant data from a given experiment is available, the scientific world would be better able to evaluate the quality of research, and to validate it through replication, much more easily.
I strongly believe that requiring data-sharing and management plans addresses a big challenge of open science: being able to quickly find the right data, as well as access and apply it. The NIH says, and I agree, that the requirement for data management plans will help make the use of open data faster and more efficient. From the Human Genome Project in the 1990s to the recent, rapid development of tests and vaccines for COVID-19, the benefits of greater openness in science have been borne out.
At its core, the goal of the new policy is to make science more open and to fight bad science. But as beneficial as the new policy is likely to be, its not without costs and shortfalls.
First, replicating a study, even one where the data is already available, still consumes expensive human, computing and material resources. The system of science doesn't reward the researchers who reproduce an experiment's results as highly as the ones who originate it. I believe the new policy will improve some aspects of replication, but will only address a few links in the overall chain.
Second are concerns about the increased workload and financial challenges involved in meeting the requirements. Many scientists arent used to preparing a detailed plan of what they will collect and how they will share it as a part of asking for funding. This means they may need training for themselves or the support of trained staff to do so.
The NIH isn't the only federal agency pursuing more open data and science. In 2013, the Obama administration mandated that all agencies with a budget of $100 million or more must provide open access to their publications and data. The National Science Foundation published its first open data policy two years earlier. Many European Union members are crafting national policies on open science, most notably France, which has already published its second.
The cultural shift in science that NIH Director Collins mentioned in 2020 has been happening, but for many, like me, who support these efforts, the progress has been painfully slow. I hope that the new NIH open data policy will help this movement gain momentum.