Category Archives: Data Mining
The role of Social Media analytics in new-age Digital Marketing – Times of India
In the new-age digital marketing landscape, where businesses strive to stay competitive and reach their target audiences effectively, social media analytics plays a pivotal role in helping marketers gain valuable insights into customer behaviour, optimise campaigns, and drive business growth. Digital marketing, a powerful strategy to reach target audiences online, relies on vast amounts of user-generated data on social media platforms and data-driven insights to make informed decisions.
For digital marketing companies and professionals, staying updated with the latest tools and techniques is crucial, so a comprehensive digital marketing course is invaluable for acquiring the skills to offer effective digital marketing services in today's competitive market. Moreover, they need a clear understanding of the role of social media analytics in shaping digital marketing practices, its benefits, and the importance of data-driven decision-making.
Social Media Analytics Decoded
Social media analytics involves the collection, measurement, and analysis of data obtained from various social media platforms. It encompasses a wide range of metrics, including engagement rates, reach, impressions, click-through rates, sentiment analysis, and demographic information. Through the use of specialised tools and algorithms, marketers can extract meaningful patterns and trends from this data, enabling them to make informed decisions and optimise their digital marketing campaigns.
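As a rough illustration of how a couple of these metrics are derived from raw counts (the post data and field names below are made up for the example, not drawn from any real platform API):

```python
# Minimal sketch: computing two common social media metrics from raw counts.
# The post records below are illustrative, not from a real platform API.

posts = [
    {"likes": 120, "comments": 14, "shares": 9, "impressions": 5400, "clicks": 180},
    {"likes": 45,  "comments": 3,  "shares": 1, "impressions": 2100, "clicks": 40},
]

for i, p in enumerate(posts, start=1):
    engagements = p["likes"] + p["comments"] + p["shares"]
    engagement_rate = engagements / p["impressions"]   # engagements per impression
    ctr = p["clicks"] / p["impressions"]               # click-through rate
    print(f"Post {i}: engagement rate {engagement_rate:.2%}, CTR {ctr:.2%}")
```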
The Significance of Social Media Analytics
Social media analytics empowers marketers to track engagement, measure campaign performance, and identify trends, enabling them to fine-tune their strategies and deliver more personalised experiences. Here are the most important benefits of this method for digital marketing service providers.
Helps in Analysing Customer Behaviour: By analysing social media data, digital marketers can understand how customers engage with their brand, what content resonates with them, and the factors that influence their purchasing decisions. This knowledge enables businesses to tailor their marketing strategies, personalise their messaging, and deliver targeted campaigns that strike a chord with their audience.
Enables Influencer Identification and Building Relationships: Social media analytics also plays a vital role in identifying influential individuals or social media influencers who can promote a brand's products or services. By analysing follower counts, engagement metrics, and sentiment analysis, marketers can pinpoint the most relevant influencers for their target audience. Building relationships with influencers can enhance brand visibility, reach, and credibility, leading to increased customer trust and loyalty.
Promotes Real-Time Monitoring and Crisis Management: In the fast-paced digital landscape, social media analytics provides marketers with real-time monitoring capabilities. By tracking social media conversations and mentions, businesses can identify potential crises or negative sentiment surrounding their brand. This allows them to address issues promptly, mitigate reputational damage, and maintain a positive brand image. Real-time monitoring also enables digital marketers to leverage positive brand mentions and capitalise on emerging trends or viral content.
Fosters Data-Driven Decision-Making: Social media analytics empowers a digital marketing company to make data-driven decisions by providing actionable insights derived from the analysis of social media data. Instead of relying on assumptions or guesswork, it can use analytics to identify key performance indicators (KPIs), measure campaign effectiveness, and optimise its digital marketing strategies. This data-driven approach increases the likelihood of success, enhances ROI, and minimises the risks associated with uninformed decision-making.
Enhances Audience Targeting and Personalisation: Social media analytics allows marketers to segment their audience based on demographics, interests, and behaviour, enabling more precise targeting. By understanding customer preferences and interests, businesses can deliver highly personalised content and offers, improving the overall customer experience. This level of personalisation increases the chances of conversion, customer satisfaction, and long-term loyalty.
Conclusion
In the new-age digital marketing landscape, social media analytics has revolutionised the way businesses understand and engage with their target audience. From understanding customer behaviour to enabling data-driven decision-making, social media analytics leads to increased brand awareness, customer satisfaction, and business growth.
FAQs
How can social media analytics help in digital marketing?
Social media analytics provides valuable insights into customer behaviour, preferences, and trends, allowing businesses to optimise their digital marketing strategies, target specific audiences, and measure campaign performance effectively.
What is digital marketing?
Digital marketing refers to the use of various online channels and strategies, such as social media, search engines, email marketing, and content creation, to promote products or services and reach a wider target audience.
How can you use social media analytics tools to grow your business?
Social media analytics tools can be used to gain insights into customer preferences, track campaign performance, identify influencers, and make data-driven decisions, ultimately driving business growth and success.
How can big data analytics be useful in social media analytics?
Big data analytics in social media analytics allows for the processing and analysis of vast amounts of data, enabling businesses to uncover meaningful patterns, trends, and insights for effective decision-making and strategy development.
How do you extract trends from social media data?
To extract trends from social media data, one can utilise data mining techniques, sentiment analysis, and monitoring tools to identify recurring patterns, popular topics, and emerging trends based on user engagement and conversations.
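As a minimal sketch of the frequency-based side of this (the example posts and the simple whitespace tokeniser are illustrative assumptions; production systems would add stemming, stop-word removal, and sentiment models):

```python
# Minimal sketch of trend extraction: compare term frequencies between two
# time windows of posts and surface the fastest-rising terms.
from collections import Counter

last_week = ["great new phone battery", "phone camera is great", "battery life poor"]
this_week = ["foldable phone launch", "foldable screen amazing", "phone launch event"]

def term_counts(posts):
    return Counter(word for post in posts for word in post.lower().split())

before, after = term_counts(last_week), term_counts(this_week)

# Score each term by its growth in relative frequency (add-one smoothing).
total_b, total_a = sum(before.values()), sum(after.values())
rising = sorted(
    after,
    key=lambda t: ((after[t] + 1) / total_a) / ((before[t] + 1) / total_b),
    reverse=True,
)
print(rising[:3])  # e.g. ['foldable', 'launch', ...]
```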
More:
The role of Social Media analytics in new-age Digital Marketing - Times of India
Rise and fall of Byju’s: A Corporate Governance Failure – Daily Excelsior
Prof. D Mukhopadhyay

The rise of Byju's, an Indian edtech unicorn, had been a remarkable success story in recent years. Founded in 2011 by Byju Raveendran, the company quickly gained popularity with its innovative approach to online education, attracting millions of users and securing substantial investments from prominent venture capitalists. However, beneath its glossy exterior, Byju's journey is marred by allegations of a corporate governance fiasco, which eventually led to its downfall. This write-up explores the proximate causes of the rise and fall of Byju's, and sheds light on the recent hue and cry after India's most eye-catching and valuable startup, once valued at $22 billion (about INR 1.8 lakh crore), defaulted on servicing a debt of $1.2 billion (about INR 10,000 crore).

Byju's began as a modest startup, offering video-based educational content to students across India. The company's early success could be attributed to its adaptive learning technology and engaging teaching methods, which resonated with a growing market of tech-savvy students. Byju's managed to secure significant investments from prominent investors, such as the Chan Zuckerberg Initiative and Sequoia Capital, enabling its expansion and aggressive marketing campaigns. As a result, the company quickly became a household name in the Indian education sector, boasting millions of users and a valuation of billions of dollars.

Despite its soaring success, Byju's journey was marred by various instances of corporate governance failure, chief among them opaque accountability and a lack of transparency in the company's financial reporting. Multiple reports and investigations revealed that Byju's indulged in unethical accounting practices, such as inflating its operational revenue to attract more investments, and such fraudulent activities eroded the trust of investors and raised serious concerns about the company's integrity. Byju's also faced allegations of mishandling sensitive user information, breaching privacy regulations and engaging in aggressive data mining practices. These revelations not only damaged the company's reputation but also triggered legal action and regulatory scrutiny. The lack of data protection measures and transparency further exposed the weak corporate governance framework within the organization.

Deloitte, Byju's auditor, resigned on the ground that the startup had not provided its financials for over a year. Besides, three independent directors representing investors and lenders also resigned. The firm has been subject to lawsuits by investors. It failed to honour its financial covenants and, interestingly, filed counter-lawsuits against its creditors and lenders. There have been a considerable number of layoffs, and utter chaos has prevailed in the corporate governance mechanism, which rocked the corporate world. The fiasco is a reminder of the importance of robust corporate governance and of the role of independent governance professionals, Chartered Accountants (CAs), Cost and Management Accountants (CMAs) and Company Secretaries (CSs), besides the auditors, who are crucial in guiding the corporate governance and strategic management of a firm by signalling appropriate warnings of potential derailment and helping the strategic management earn investors' confidence and trust.
As far as auditors are concerned, they assess the accuracy and reliability of financial statements, ensure compliance with the applicable accounting standards, and provide an independent and objective evaluation of a company's financial records, internal controls, and financial reporting system. In the case of Byju's, the auditors were responsible for conducting thorough audits, including examining the company's financial statements, verifying revenue recognition practices and assessing the authenticity of financial data, thereby underpinning investors' confidence in the transparency and accuracy of the company's disclosures.

Chartered Accountants (CAs) specialize in various aspects of financial accounting, corporate finance, and taxation, and they could play a significant role in corporate governance affairs by providing expert advice on financial management, compliance with accounting standards, risk assessment and adherence to ethical accounting practices. Similarly, Cost and Management Accountants (CMAs) have a critical role in keeping the company on track and maintaining momentum in securing strategic performance, and in monitoring, evaluating and reporting periodic results. CMAs specialize in strategic management accounting and the analysis and interpretation of financial information for management decision-making. Their expertise lies in cost management, beyond-budgeting, strategic performance evaluation, and financial planning. In the case of Byju's, CMAs could contribute to effective corporate governance by providing insights into cost efficiency, profitability analysis and enterprise risk management, helping investors gauge the company's financial health and its ability to generate sustainable returns. Finally, Company Secretaries (CSs) play a significant role in ensuring compliance with legal and regulatory requirements, as well as facilitating effective corporate governance practices. They are responsible for maintaining statutory records, facilitating board meetings, ensuring compliance with company laws, and advising on corporate governance matters periodically. In the case of Byju's, CSs would ensure the company's compliance with relevant laws and regulations, facilitate transparency, maintain a strong corporate governance framework and contribute to building investors' confidence in the company's operations.

Further, many researchers have examined the relationship between corporate governance practices and the stock market returns of publicly traded firms, and their findings reveal a negative correlation between weak corporate governance and stock performance, indicating that investors tend to lose confidence in companies with governance issues, which is exactly what happened at Byju's. To overcome the challenges and regain stakeholders' trust, Byju's should consider implementing recommendations for strengthening corporate governance practices to ensure transparency, accountability, and adherence to ethical standards. This can be achieved by appointing independent directors with expertise in corporate governance. These directors can provide unbiased oversight and ensure that the company operates in the best interests of its stakeholders. Stringent internal controls, robust financial reporting mechanisms, and regular audits can restore confidence in its financial operations and regain the trust of investors. Byju's should prioritize transparency and disclosure of accurate and reliable information to investors, regulators, and the public.
Byju's must adhere to accounting standards and present financial statements that reflect its true, unmanipulated financial performance. Transparent reporting and disclosure practices would help rebuild credibility and foster trust among stakeholders, besides strengthening data protection measures. The company should strictly comply with relevant data protection regulations and standards, such as the General Data Protection Regulation (GDPR) and the Personal Data Protection regime, in order to assure users that their personal information is safe and secure; this will not only help rebuild trust but also mitigate legal and regulatory risks, besides fostering a culture of ethical conduct within a strong ethical framework in the organization. Byju's should create an environment that encourages employees to act ethically and report any violations and non-compliance they witness. Active engagement with stakeholders, including investors, customers, employees, and regulators, establishing open lines of communication, providing updates on corrective measures, and gathering feedback to improve operations are urgently called for. The company should develop a robust risk management framework to identify and mitigate potential risks, including conducting regular risk assessments, implementing risk mitigation strategies, and monitoring risk exposure. Byju's should proactively collaborate with regulatory authorities such as the Securities and Exchange Board of India (SEBI), the Ministry of Corporate Affairs (MCA), and other relevant bodies, and demonstrate its willingness to cooperate and adhere to regulatory requirements in order to rebuild credibility and foster a positive relationship with the authorities overseeing corporate governance practices.

As far as investors are concerned, in the wake of the Byju's fiasco, and to seek justice and mitigate credit risk, they are advised to protect their interests by consulting legal experts specializing in corporate law, securities law, and consumer protection law to secure their rights and pursue the available legal remedies, which may include filing lawsuits against Byju's for fraud, misrepresentation, or breach of fiduciary duties. Investors and creditors should actively engage with regulatory authorities such as SEBI, the MCA and other appropriate authorities to ensure appropriate action against Byju's. Besides, creditors should individually adopt robust credit risk management practices involving due diligence on potential borrowers, assessing their financial health, and having professionals monitor their performance regularly, in addition to diversifying their portfolios and avoiding excessive exposure to a single borrower or company in order to reduce the impact of any individual default. Investors and creditors can also explore options for obtaining compensation through negotiations with Byju's management, board and legal representatives, seeking restitution or settlement. By presenting a strong case backed by evidence and highlighting the impact of Byju's fraudulent activities, investors and creditors may have a chance to recover some of their losses through negotiated settlements or financial restitution. Moreover, investors and creditors should stay informed about the ongoing developments related to Byju's and its legal proceedings.
The rise and fall of Byju's serves as a cautionary tale, highlighting the dire consequences of corporate governance failures seen again and again, starting with M/s Satyam Computers and dating back to Harshad Mehta's fraudulent activities and the financial scams that once severely rocked the Indian capital markets. All of these circumstantially evidence the inefficiency and ineffectiveness of an existing regulatory mechanism that is reactive in nature and many a time fails to play its proactive role as the watchdog of the corporate sector. Despite its initial success, Byju's unethical practices, lack of transparency and disregard for data privacy ultimately led to its downfall. The corporate governance failure at Byju's needs no special attestation; it underscores the importance of a robust governance framework in any organization. This serves as a reminder for both investors and companies to prioritize integrity, transparency, and accountability to foster sustainable growth and maintain public confidence and trust. Overcoming the challenges faced by Byju's requires a comprehensive approach that prioritizes integrity, transparency, and accountability, and investors and lenders are advised to maintain constant vigilance over commercial operations and the flow of authentic and reliable information. It may not be too much to say that critical success factors, including risks, need to be identified, and that investors should do their homework before deciding to park their investments in any portfolio, keeping in view the spirit of the maxim caveat emptor.

(The author is a Bangalore-based Educationist and Management Scientist)
Read the original here:
Rise and fall of Byju's A Corporate Governance Failure - Daily Excelsior
Newcrest Mining (ASX:NCM) Shareholders Will Want The ROCE Trajectory To Continue – Simply Wall St
What trends should we look for if we want to identify stocks that can multiply in value over the long term? Amongst other things, we'll want to see two things: firstly, a growing return on capital employed (ROCE) and, secondly, an expansion in the company's amount of capital employed. Put simply, these types of businesses are compounding machines, meaning they are continually reinvesting their earnings at ever-higher rates of return. So on that note, Newcrest Mining (ASX:NCM) looks quite promising in regards to its trends of return on capital.
For those that aren't sure what ROCE is, it measures the amount of pre-tax profits a company can generate from the capital employed in its business. Analysts use this formula to calculate it for Newcrest Mining:
Return on Capital Employed = Earnings Before Interest and Tax (EBIT) ÷ (Total Assets − Current Liabilities)
0.067 = US$1.1b ÷ (US$17b − US$794m) (Based on the trailing twelve months to December 2022).
So, Newcrest Mining has an ROCE of 6.7%. Ultimately, that's a low return and it under-performs the Metals and Mining industry average of 11%.
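A quick way to sanity-check that arithmetic (using the rounded figures quoted above, which land a touch higher than the article's 6.7% because the published ratio comes from unrounded inputs):

```python
# A minimal check of the ROCE arithmetic quoted above (figures in US$).
ebit = 1.1e9                 # trailing-twelve-month EBIT
total_assets = 17e9
current_liabilities = 794e6

capital_employed = total_assets - current_liabilities
roce = ebit / capital_employed
print(f"ROCE = {roce:.3f} ({roce:.1%})")   # ~0.068 with these rounded inputs
```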
In the above chart we have measured Newcrest Mining's prior ROCE against its prior performance, but the future is arguably more important. If you're interested, you can view the analysts' predictions in our free report on analyst forecasts for the company.
While in absolute terms it isn't a high ROCE, it's promising to see that it has been moving in the right direction. The data shows that returns on capital have increased substantially over the last five years to 6.7%. The amount of capital employed has increased too, by 50%. So we're very much inspired by what we're seeing at Newcrest Mining thanks to its ability to profitably reinvest capital.
To sum it up, Newcrest Mining has proven it can reinvest in the business and generate higher returns on that capital employed, which is terrific. And investors seem to expect more of this going forward, since the stock has rewarded shareholders with a 54% return over the last five years. In light of that, we think it's worth looking further into this stock because if Newcrest Mining can keep these trends up, it could have a bright future ahead.
On the other side of ROCE, we have to consider valuation. That's why we have a FREE intrinsic value estimation on our platform that is definitely worth checking out.
While Newcrest Mining isn't earning the highest return, check out this free list of companies that are earning high returns on equity with solid balance sheets.
Find out whether Newcrest Mining is potentially over or undervalued by checking out our comprehensive analysis, which includes fair value estimates, risks and warnings, dividends, insider transactions and financial health.
Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at) simplywallst.com.
This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only using an unbiased methodology and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives, or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.
Read more here:
Newcrest Mining (ASX:NCM) Shareholders Will Want The ROCE Trajectory To Continue - Simply Wall St
Enhanced performance of gene expression predictive models with … – Nature.com
The advances in the field of Machine Learning have revolutionised other fields as well. With increasing computational power and decreasing costs, the predictive power of modern-day deep learning networks allows scientists to apply those methods to various tasks that would be impossible to solve otherwise. Those advances did not omit the genomics field1,2. The first attempts to predict expression solely from the DNA sequence started just after The Human Genome Project3; however, they had a vast number of limitations4,5 and mainly concentrated on classical modelling approaches. Those limitations started to disappear with the expansion of deep learning models. One of the first major studies using CNNs6 and XGBoost7 started a new era in predicting expression with the introduction of ExPecto1. The line of work continued with the use of CNNs in multiple models, including Basenji28, and finally with transformer-based models like Enformer2. In our study, however, we decided to take the standard CNN-based approach and expand it further by changing the input to include spatial genomic information. The ExPecto model we decided to advance takes the 20kbp surrounding the TSS of a given gene and uses the sequence from that region to train a deep neural network to predict epigenetic factors. Using those factors, the tissue-specific gene expression profile is calculated with a high Spearman correlation score. In our study, we investigated whether epigenetic marks alone are sufficient for the complex task of predicting expression, and hypothesised that while they are incredibly informative, there is still room for improvement. We decided to investigate the effects of the spatial chromatin architecture inside cell nuclei on expression by comparing models created with 3D information and without it. To do that, we modified the ExPecto algorithm accordingly, so that it uses not only the 20kbp region around the TSS but also regions that are linearly distal but are, in fact, spatially close, thanks to spatial interactions mediated by specific proteins of interest. The overview of the algorithm proposed by us, SpEx (Spatial Gene Expression), is shown in Fig. 1.
The architecture of SpEx. The spatial heatmaps are used for obtaining the regions close to the TSS (excluding the ±20kbp around the TSS), the sequence from those regions is taken and put into the classic deep learning ExPecto model, which generates an epigenetic signal over those regions. The classical features from ExPecto are merged with those obtained from the spatially close regions, and decision trees predict the expression levels. See Methods for more information about the algorithm.
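To make the region-selection step concrete, here is a minimal sketch, not the authors' published code, of how one might pick bins that are linearly distal but spatially close from a binned contact map; the bin size, threshold, and matrix format are assumptions for illustration:

```python
# Minimal sketch: given a binned chromatin contact map, keep bins that
# interact strongly with the bin containing the TSS but lie outside the
# +/-20 kbp window that the baseline ExPecto model already covers.
import numpy as np

BIN = 10_000          # assumed bin size of the contact map, in bp
LOCAL = 20_000        # region already used by the baseline model

def spatially_close_bins(contacts: np.ndarray, tss_pos: int, min_contacts: float):
    """contacts: symmetric (n_bins x n_bins) matrix for one chromosome."""
    tss_bin = tss_pos // BIN
    exclude = LOCAL // BIN                     # bins inside the local window
    close = []
    for b, strength in enumerate(contacts[tss_bin]):
        if abs(b - tss_bin) > exclude and strength >= min_contacts:
            close.append(b)                    # linearly distal, spatially close
    return close

# Toy example: 5-bin chromosome with one strong distal contact.
toy = np.zeros((5, 5)); toy[0, 4] = toy[4, 0] = 12.0
print(spatially_close_bins(toy, tss_pos=3_000, min_contacts=5.0))  # [4]
```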
To prove the model's validity, we decided to carry out an empirical study on how specific protein-mediated interactions help in the prediction of gene expression. To do that, we selected the three proteins most important for loop creation: cohesin, CTCF, and RNAPOL2. The effects of those proteins being unable to bind or to be created properly were shown in multiple studies, which inspired us to ask whether machine learning models will improve if we add 3D information from interactions mediated by those proteins.
Cohesin is a protein complex discovered in 19979,10 by two separate groups of scientists. The complex is made up of SMC1, SMC3, RAD21, and SCC3. In human cell lines, however, SCC3 (present in yeast) is replaced by its paralogues SA1 (ref. 11), SA2 (ref. 12), and SA3 (ref. 13). SA3 appears in cohesin only during mitosis14, so we concentrate on SA1 and SA2, which form cohesin in somatic cells. The complex is essential to the proper functioning of the cell nucleus: it is fundamental for loop extrusion15, it stabilises topologically associating domains (cohesin-SA1)16, and it allows interactions between enhancers and promoters (cohesin-SA2)16. The depletion of cohesin in a nucleus removes all the domains17 and completely destroys the spatial organisation of the chromatin. Mutations of cohesin negatively affect the expression of genes, e.g. in Cornelia de Lange syndrome18,19 and cancer20, where the altered complex is incapable of sustaining its proper function, leading to disease.
CTCF (CCCTC-binding factor) is an 11-zinc-finger protein. Its primary function is the organisation of the 3D landscape of the genome21. This regulation includes creating topologically associated domains (TADs)22,23,24, loop extrusion25, and alternative splicing26. The protein very often works with the previously mentioned cohesin complex, allowing loop formation. CTCF, as a regulator of the genome, binds to specific binding motifs and regulates around those loci. That is why, in the case of mutations in the motifs, it might bind improperly, thus allowing disease development. However, it is not only mutations in the binding sites that are disease-prone. Mutations in the CTCF protein itself have proven to significantly influence the development of multiple conditions. Some examples of diseases induced by a mutation in the CTCF protein include MSI-positive endometrial cancers27, breast cancers28,29, and head and neck cancer30.
There are three common RNA polymerase complexes in eukaryotic organisms: I, II, and III31. In this study, we focus mainly on RNAPOL2, as it is responsible for the transcription of DNA into messenger RNA32,33, thus having the most significant impact on the expression of genes. The mechanisms responsible for creating RNAPOL2 loops are complex and require not only the RNAPOL2 protein but also several other transcription factors34,35. Mutations in those transcription factors have been shown to be linked to various diseases36, including acute myeloid leukaemia37,38,39, Von Hippel-Lindau disease40,41, sporadic cerebellar hemangioblastomas42, benign mesenchymal tumours43, xeroderma pigmentosum, Cockayne syndrome, trichothiodystrophy44, and Rubinstein-Taybi syndrome45.
Multiple studies have shown the spatial landscape created by cohesin-mediated chromatin loops. The first major cohesin ChIA-PET study from 201446 showed the internal organisation of chromatin in the chromosomes. For example, the study provided a list of enhancer-promoter interactions, which can be a starting point for gene expression study.
The next study from 202047 extended the 2014 study and showed that among 24 human cell types, 72% of those loops are the same; however, the remaining 28% are correlated to the gene expression in different cell lines. Those loops mostly connect enhancers to the promoters, thus regulating the gene expression. Another interesting insight from this study is that those different profiles of interactions are effective in clustering the cell types depending on the tissue they were taken from.
CTCF, as mentioned above, is responsible for loop extrusion. That is why it is very popular to investigate CTCF-mediated interactions. Once again, as with the cohesin complexes, ChIA-PET is used for obtaining the interactions mediated by CTCF. One of the major studies, from 201548, shows the genomic landscape across 4 cell lines. The authors discovered that SNPs occurring in the motif of the CTCF-binding site can alter the existence of the loop and, by that, contribute towards disease development. They assessed the SNPs residing in the core CTCF motifs and found 70 such SNPs. Of those, 32 were available from previously conducted GWAS studies, and 8 were strongly associated (via linkage disequilibrium) with disease development.
Another study, from 201949, analysed mutations using whole-genome sequencing (WGS) data from 1,962 samples across 21 different cancer types. Such an analysis, enhanced with CTCF ChIA-PET data, showed that disruption of the insulators (which create the domains) by motif mutations and improper binding of CTCF (and, by that, loss of the loop) leads to cancer development. Using a computational approach, they found 21 potentially cancerous insulators.
The transcription-related chromatin interactions, such as those mediated by RNAPOL2, are of great interest as well; they control transcription directly, after all. A study from 201250 showed RNAPOL2-mediated ChIA-PET interactions in 5 different cell lines, mapping the transcriptional genomic landscape. Another study, from 202051, performed the same experiments on the RWPE-1, LNCaP, VCaP, and DU145 cancer cell lines. Similar to the 2012 study, they showed the spatial interactions based on RNAPOL2, but this time in cancer cell lines. Furthermore, they showed that cohesin and CTCF interactions provide a stable structural framework for the RNAPOL2 interactions that regulate expression, thus making all of the proteins described in this section crucial for the proper expression of genes.
Those findings were the main motivation for our analysis: based on the evidence, cohesin, CTCF, and RNAPOL2 interactions should give us more information on genetic expression, thus improving the metrics of the machine learning models. In this work, we present an extension of the ExPecto1 deep learning model that is enriched with spatial information and, as expected, improves the statistical metrics.
ExPecto1 is a model introduced in 2018 for predicting gene expression from sequence. It uses a deep neural network (namely, a Convolutional Neural Network, CNN) composed of, most importantly, 6 convolutional layers and 2 max-pooling layers (the activation function for all layers is ReLU); for the exact architecture, see the original paper. As mentioned, the input to the network is the DNA sequence, and the output takes the form of 2002 epigenetic factors, collected from ENCODE and Roadmap Epigenomics. The network takes a 2000bp window and predicts the epigenomic profile of its 200bp middle, using the remaining base pairs as context. The model is then applied to the 20,000bp region surrounding the TSS, with the step size determined by the aforementioned 200bp, yielding 2002 features multiplied by 200 bins (100 left and 100 right), so the total number of features describing the gene is 400,400. Those features are then transformed using exponential functions (10 upstream and 10 downstream of the TSS), so the final number of features is 40,040. Finally, xgboost (namely, gradient boosting of linear regression models) is used for the prediction of gene expression. The authors obtained a Spearman correlation score of 0.819, with testing done on chromosome 8.
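A minimal sketch of that spatial-transformation step may help; the decay constants and the zeroing of each basis on the wrong side of the TSS are illustrative assumptions, not the published ExPecto parameters:

```python
# Minimal sketch of the exponential spatial transformation described above:
# 200 positional bins per epigenetic feature are collapsed onto 10 upstream
# + 10 downstream exponential basis functions, turning 2002 x 200 = 400,400
# values into 2002 x 20 = 40,040.
import numpy as np

n_features, n_bins = 2002, 200                     # bins span TSS-20kb .. TSS+20kb
bin_centers = (np.arange(n_bins) - 99.5) * 200.0   # distance to TSS in bp

rng = np.random.default_rng(0)
preds = rng.random((n_features, n_bins))           # stand-in for CNN epigenomic output

decays = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0]) * 1e-3
upstream   = np.exp(-decays[:, None] * np.clip(-bin_centers, 0, None))  # (10, 200)
downstream = np.exp(-decays[:, None] * np.clip( bin_centers, 0, None))
basis = np.vstack([upstream * (bin_centers < 0),   # zero out the wrong side
                   downstream * (bin_centers > 0)])  # (20, 200)

reduced = preds @ basis.T                          # (2002, 20) -> 40,040 values
print(reduced.shape, reduced.size)                 # (2002, 20) 40040
```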
View post:
Enhanced performance of gene expression predictive models with ... - Nature.com
Opinion: How the new Data Protection Bill sidelines Supreme Court judgment on right to privacy – India Today
By Radhika Roy, Tanmay Singh: The wait for Godot seems to have finally come to an end with news reports stating that the Union Cabinet has approved the Digital Personal Data Protection Bill, 2022 (DPDP Bill). The Bill, which has seen numerous changes since the recommendation of the Justice BN Srikrishna Committee on Data Protection in July 2018, is slated to be introduced in the Monsoon Session of the Parliament later this month.
Last year, the Data Protection Bill, 2021, was withdrawn by Union Minister for Communications and Information Technology Ashwini Vaishnaw in the Lok Sabha. Thereafter, on November 18, 2022, the DPDP Bill was released for public consultation. While we are yet to see if any radical changes have been implemented, the DPDP Bill, as structured, reads more as a Data Processing Bill than a Data Protection Bill.
One aspect of the Bill that manages to take the cake is the provision pertaining to deemed consent. The vastness and vagueness surrounding the usage of deemed consent renders the fundamentals of consent to be illusory. Simply speaking, by deeming consent for subsequent uses, your data may be used for purposes other than what it has been provided for and, as there is no provision for you to be informed of this through mandatory notice, you may never even come to know about it.
Moreover, what makes the provision glaringly outrageous is that its understanding of consent surrounding personal data is contrary to the right to informational privacy and the principle of specific informed consent, as established by the Supreme Court in the landmark case of KS Puttaswamy v. Union of India. With over 760 million active users, and a push for Digital India by digitising records and launching platforms such as DigiLocker, BHIM, Parivahan Sewa, etc., a citizen-oriented data protection regime is the need of the hour.
Clause 5 of the DPDP Bill states that the personal data of a Data Principal, i.e. the person to whom the personal data relates, may be processed for a lawful purpose for which the Data Principal has given either their consent or deemed consent. Clause 8 of the DPDP Bill illustrates when a Data Principal is deemed to have given consent. These illustrations range from responding to a medical emergency involving the Data Principal or any other individual, to a breakdown of public order.
The array of reasons for which consent is deemed to be given is extensive, and unlike consent under Clause 7(4) of the DPDP Bill, consent under Clause 8 cannot be withdrawn, though it is not expressly clear why deemed consent is not revocable. Clause 8(a) states that the Data Principal is deemed to have given consent for purposes for which it has not explicitly indicated that its data cannot be used. Accordingly, by employing a negative checklist and making the Data Principal categorically note the purposes for which they do not want their data to be processed, the DPDP Bill places the burden on the Data Principal to ensure that they cover all grounds, failing which their personal data will be processed for purposes with which they may not agree.
What makes matters worse is that the situations in which an individual can be deemed to have given consent are vague and broad. Stretched to its logical conclusion, deemed consent can be interpreted to cover anything under the sun. What does "public order" entail? What is the exact extent of "purposes of employment"? All of these phrases have been left undefined and indeterminate. Such unestablished grounds will allow Data Fiduciaries, i.e. the entities authorised to collect and process personal data, to interpret this clause to their convenience and justify processing personal data for reasons that may not be palatable to the Data Principal, and even to evade legal consequences for the same.
With burgeoning, fast-paced technology, the right to privacy pertaining to an individual's personal information, i.e. informational privacy, has assumed significance. While deciding the constitutional validity of Aadhaar, a biometrics-based ID made by the State, the Supreme Court held in K.S. Puttaswamy v. Union of India that the right to privacy included the right to informational privacy. It was further observed that in the age of information, dangers of violating this right did not just originate from the State, but also from non-state actors.
It was in this context that the Supreme Court had recommended to the Union Government to construct a robust data protection regime for which the committee chaired by Justice BN Srikrishna was constituted. While legitimate state interest in accessing personal data for processing was emphasised, the Supreme Court also noted that providing notice and seeking informed consent formed an intrinsic part of procuring this personal data. This observation was based on a report of a Group of Experts on Privacy (dated October 16, 2012), which had been constituted by the Union Government itself.
As per Clause 6(1), every request by the Data Fiduciary to a Data Principal for consent is to be accompanied or preceded by an itemised notice. This notice must contain a description of personal data sought to be collected as well as the purpose of processing such personal data. Clause 7 provides a clear and concise definition of consent, and carves out the responsibilities of a Data Fiduciary while processing personal data based on consent. Clause 7(4), as discussed before, stipulates that consent given by a Data Principal for processing personal data can be withdrawn at any time.
Giving the landmark judgment on right to privacy a complete go-by, none of the aforementioned safeguards have been included in the case of deemed consent, which is the antithesis of informed consent. Not only are you not notified before your personal data is processed or about the purpose for which it is being utilised, you are also left without the option to exercise withdrawal of consent. Additionally, the concept of purpose limitation is laid to waste as deemed consent extends to all purposes, barring those for which the Data Principal has indicated that they do not wish to give consent. The burden of covering all bases while giving consent is on the Data Principal.
Further, the fact that the DPDP Bill fails to differentiate between personal data and sensitive personal data such as name, bank details, and biometric data, invalidates the higher expectation of privacy attached to the latter. This becomes an issue as, even though Clause 9(5) delineates the obligation of a Data Fiduciary to ensure that personal data in its possession remains protected, security breaches are regular occurrences, and in 2017, India experienced 37% of the data breaches in the world.
In June 2023, it was reported that after a major privacy breach on the CoWIN app, a bot leaked personal details on Telegram of all individuals who had been vaccinated against COVID-19. In May 2023, an e-commerce retailer, Zivame, suffered a data breach, and the personal information of thousands of Indian women who had used Zivame was put up for sale. Just a week ago, 12,000 confidential records of State Bank of India employees, including screenshots of SBI passbook and Aadhaar card, were made public on Telegram. Such leaks, coupled with the lax approach to obtaining consent of vulnerable and unaware users, can leave its victims at the receiving end of identity thefts, extortion, etc.
The DPDP Bill creates an illusion of imposing obligations on a Data Fiduciary and strengthening a consent-centric statutory framework. However, in reality, instead of protecting the citizens, or Digital Nagriks, from function creep and data mining by State and non-state actors, the DPDP Bill legitimises the mal-intent of a Data Fiduciary and reduces the burden, both legal and penal. The Bill requires major reconfiguring and the judgment in K.S. Puttaswamy v. Union of India should be the guiding principle while devising the provisions. It seems that the State has forgotten that the Bill is for the protection of people, and not corporates and itself.
(Tanmay Singh is Senior Litigation Counsel and Radhika Roy is Associate Litigation Counsel at the Internet Freedom Foundation, Delhi)
(Views expressed in this opinion piece are that of the author.)
Link:
County, Rison Both Approve Ordinances on Data Mining – Cleveland County Herald
DATA MINING DISCUSSION - Justice of the Peace Donnie Herring (left) talks to County Attorney Tiffany Nutt about some of the issues surrounding an ordinance that establishes building guidelines for data mining centers in Cleveland County. Herring, along with Justice Charles Rodgers of Rison, was critical of the centers. Herring cited the fact that the centers are a tremendous drain on a community's electrical infrastructure and that they do not provide any tax or employment benefits.
RISON - The Cleveland County Quorum Court and the Rison City Council both approved ordinances this week placing noise limitations and other restrictions on data mining centers that might come to the county or city.

Data mining centers are facilities that house large banks of computers that run complex code for cryptocurrency like Bitcoin or process large amounts of digital data for businesses. Most data mining centers house thousands of computers, which require a tremendous amount of electricity and also create high levels of noise to keep the computers cool.

The ordinances approved by the Cleveland County Quorum Court and Rison City Council are essentially identical. County Judge Jimmy Cummings said the model ordinance was put together by a legal team with the Association of Arkansas Counties.

The ordinance requires the owner of the property where the data center will be located to notify the neighbors of the intent to place a center there. It also requires the property owner to conduct noise level tests along the boundary of the property both before and after the center is built. There are also stipulations regarding what noise-reducing measures must be included in the building plans. The complete ordinance can be found in its entirety in our public notices database, or in the printed edition.

Local governments have until Aug. 1 to approve such guidelines before Act 851, a new state law governing data mining centers, goes into effect.

Data mining centers have become a hot topic for local governments this year after residents living around some of the centers that have already been established in Arkansas and other states have complained about the loud noises coming from the facilities. Most data mining centers house thousands of computers. With that many computers working at the same time, a great amount of heat is created that must be pulled from the building to allow the computers to function properly. Currently, the most common method to do that is
Here is the original post:
County, Rison Both Approve Ordinances on Data Mining - Cleveland County Herald
Deep-sea mining could soon be approved: how bad is it? – Nature.com
An excavating machine on the Japanese research vessel Hakurei collected cobalt-rich ocean sediments in a 2020 test run. Credit: Kiyoshi Ota/Bloomberg via Getty
Commercial mining of the sea floor could soon get the green light. The International Seabed Authority (ISA), a body associated with the United Nations that oversees deep-sea mining in international waters, is now meeting in Kingston, Jamaica, where it could decide whether companies can begin excavating the sea floor for minerals and metals such as cobalt, nickel and sulfides.
Proponents say that this move could help with meeting the growing demand for rare-earth metals used in batteries both for electric cars and for storing renewable energy, aiding the shift to a low-carbon economy. However, research hints that the potential ecological impacts of deep-sea mining are larger than previously thought. Nature explores just how bad deep-sea mining could be.
Scientists say that very little is known1 about deep-sea ecosystems, making it difficult to assess how they will be affected by mining. However, a few new studies are providing clues to the damage that large-scale mining might cause.
A study2 published today in Current Biology is the first to examine the environmental effects of mining cobalt-rich crusts. These rock-hard, metallic layers, which form on the side of underwater mountains called seamounts, are among three deep-sea resources that have been proposed to the ISA as a target for mining. In 2020, a two-hour operation funded by the Japanese government excavated a roughly 120-metre-long strip of cobalt-rich crust on a seamount in the northwest Pacific Ocean, as a test run for mining activities.
To investigate the operation's effects, scientists reviewed video footage collected by a remotely operated vehicle. They found that, in the year after the excavation, the density of actively swimming animals, such as fish and shrimp, dropped by 43% in areas directly affected by sediment kicked up by mining, and by 56% in adjacent areas.
Animals observed during a study of the effects of deep-sea mining included those in the categories Actiniaria, Holothuroidea, Pentametrocrinidae (top row, left to right), Euplectellidae, Notocanthiformes and Aspidodiadematidae (bottom row, left to right). Credit: JOGMEC
Travis Washburn, a benthic ecologist and co-author of the study who at the time was at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, says that he didn't expect to see any ecological impacts from such a small mining operation. He suggests that the fish and shrimp swam away from the area because the mining and sediment pollution might have affected their food supply. "The results show that effects are felt beyond the mining areas by a pretty substantial amount," he says.
A study3 published last week suggests that tuna near the ocean surface could gravitate to areas likely to be affected by mining. Writing in npj Ocean Sustainability, scientists project that climate change will drive large numbers of the fish into the Clarion-Clipperton Zone (CCZ), a 4.5-million-square-kilometre area in the eastern Pacific Ocean between Hawaii and Mexico where much of the mining interest is focused. The study predicts that by the middle of the century, the zone's total biomass of skipjack (Katsuwonus pelamis) will rise by around 31% and that of yellowfin (Thunnus albacares) by 23%.
Data about deep-sea mining's effects on animals in the upper layers of the sea are scarce. But co-author Diva Amon, a marine biologist and a scientific adviser to the Benioff Ocean Science Laboratory at the University of California, Santa Barbara, says that deep-sea mining could harm tuna and other organisms, such as Pacific leatherback turtles (Dermochelys coriacea).
Plumes of sediment stirred up by mining could contaminate sea water and damage fishes' gills and filter-feeding apparatus, says Amon. The same problems could occur when mining waste is thrown back into the water. Furthermore, noise from the mining operations could alter the tunas' feeding and reproductive behaviour, she adds.
"Deep-sea mining will potentially have impacts from the sea surface right down to the sea floor," says Amon.
Proponents of deep-sea mining, such as The Metals Company, a mining start-up based in Vancouver, Canada, that is seeking permission to harvest metals on the sea floor, argue that deep-sea mining will benefit the environment by helping the shift to a green economy. They also argue that the effects of sediment plumes can be minimized and that mining contractors don't currently propose to release waste from exploiting mineral-rich sea-floor deposits called polymetallic nodules.
Amon says that deep-sea mining is unlikely to replace terrestrial mining, so comparing one with the other is not helpful. "Both will proceed, and we'll see double destruction in two different parts of the planet," she says.
Washburn says that deep-sea mining might cause less direct damage to people than does terrestrial mining. But by spoiling huge swathes of the sea floor, it could disrupt marine processes such as carbon sequestration, which helps to offset humans' greenhouse-gas emissions. "I don't think anybody has enough information to say which one is better or worse," he says.
Scientists first need to know more about what lives in the deep ocean, says Amon. Then they can begin to investigate how extensive mining can be before it causes serious harm to key ecosystem functions, such as the ocean's ability to sequester carbon. The challenge, she says, is that deep-ocean science is slow and expensive, and scientists need more time and money to understand mining's consequences.
Matthew Gianni, co-founder of the Deep-Sea Conservation Coalition, a conservation group based in Amsterdam, says that deep-sea mining could become unnecessary thanks to advances in recycling and the advent of batteries that use iron and phosphate instead of nickel and cobalt. Furthermore, improvements in environmental standards for terrestrial mining will lessen that industry's ecological damage.
Washburn, who started his career studying ecological disasters such as 2010's Deepwater Horizon oil spill, is buoyed by the efforts to assess potential impacts before the mining operations begin. Historically, humanity tends to act first and consider the consequences later, he says.
"We're actually trying to figure it out beforehand, so that's a pretty good place to be," he says.
More:
Deep-sea mining could soon be approved how bad is it? - Nature.com
Development of a prediction model for the depression level of the … – Nature.com
The results for the demographic characteristics of the study sample and the differences in average depression levels are shown in Table 2. Females accounted for a higher number than males: females (n=2008, 67.5%) and males (n=967, 32.5%). For the age distribution, ages 80 or older formed the largest group with 1475 people (49.6%), followed by ages 75–79 with 755 people (25.4%) and ages 70–74 with 459 people (15.4%). Regarding the level of education, 2107 people (70.8%) had an elementary school education, and 471 people (15.8%) had a middle school education, indicating that the majority had a low level of education. Regarding the number of household members, two-person households accounted for the largest portion with 1459 people (49.0%), showing that the majority lived with one other person. As for having disabilities, 2425 people (81.5%) answered no, and 550 people (18.5%) answered yes. In terms of participation in economic activities, not participating accounted for more than half of the participants (n=2042, 68.6%).
In terms of depression, 2553 people (85.8%) reported no and 422 people (14.2%) reported yes. Lastly, the differences in depression level across the sociodemographic characteristics were evaluated. There were significant differences by gender (t=3.547, p<0.001) and by participation in economic activities (F=7.326, p<0.001), but no differences were found for the other factors.
The descriptive statistics for the main factors are shown in Table 3. Considering that scores for health promoting behaviour range from a minimum of 0 to a maximum of 3, the average of 2.2 points can be regarded as high. With a standard deviation of 0.72, there was little variation in health promoting behaviour among the elderly in low-income households. With reference to subjective health awareness, the average was 2.8 points, indicating that many elderly people rated their health above average. Regarding the level of medical expenses, the average monthly expenditure was 158,000 won with a standard deviation of 20.17, indicating large differences in expenditure among the elderly in low-income households. Family support, social support, and leisure life satisfaction showed average scores of 2.7, 2.6, and 2.3, respectively, which can be considered good, given that scores range from 0 to 4.
The relative importance of the predictive factors that contributed to predicting depression in low-income seniors, obtained via feature selection, is shown in Table 4. The higher the rank of a predictor, the greater its influence in predicting the level of depression; the highest-ranked factor was 'leisure life satisfaction.' This result can be interpreted as leisure life satisfaction having a greater effect than any other factor when predicting the level of depression of the elderly in low-income households. Furthermore, subjective health awareness, family support, and social support were found to be in the upper ranks, while the presence or absence of chronic diseases, educational level, disability, and health behaviour were distributed in the lower ranks. A SHAP summary plot was created (Fig. 2) as a visualization of how much each explanatory variable affects the prediction of depression. A yellow bar indicates a positive influence on the occurrence of depression; the red and orange bars indicate a negative impact. The red bars were found to be the most influential variables. Leisure life satisfaction can be used either as an explanatory or a dependent variable; this study used it as an explanatory variable because the subjects were low-income elderly, and the relationship between leisure life satisfaction and depression in this population is often reported as causal, with leisure life satisfaction affecting depression29.
The summary plot of the SHAP values.
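For readers who want to reproduce this kind of figure, a minimal sketch with synthetic, survey-style data follows (the feature names are an illustrative subset, not the study's dataset; it assumes the xgboost and shap packages are installed):

```python
# Minimal sketch: fit a tree model on made-up data, compute SHAP values,
# and draw a summary plot like Fig. 2.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
feature_names = ["leisure_satisfaction", "subjective_health",
                 "family_support", "social_support"]   # illustrative subset
X = rng.random((300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300) < 0.8).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)
shap.summary_plot(shap_values, X, feature_names=feature_names)
```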
In this study, the classification techniques used to develop the most accurate model for predicting the level of depression of the elderly in low-income households were artificial neural networks, decision trees, logistic regression, and random forest analysis. Table 5 shows the results of the classification analysis obtained by sequentially applying the wrapper's stepwise method to the factors ranked by relative importance in Table 4. Based on this analysis, the decision tree algorithm showed higher predictive power than the other three algorithms. Logistic regression reached a prediction accuracy of 73.2%, and the artificial neural network 81.8%. The decision tree, in contrast, showed a tendency for predictive accuracy to increase as the number of input factors increased, except when there was only one input factor. When all 13 factors were input, it achieved an accuracy of 97.3%, a sensitivity of 100%, and a specificity of 94.6%. When forming the decision tree, the factor with the greatest impact was subjective health awareness, followed by leisure life satisfaction, family support, and social support. To ensure that the main outcome was reliable and robust, a sensitivity analysis was conducted by dividing the dependent variable, depression incidence, at two thresholds (15 points or less, 16 points or more); the analysis revealed that the main outcome did not change (Tables 6, 7).
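A minimal sketch of this four-model comparison on synthetic data (the data generation and train/test split are assumptions; only the count of 13 predictors mirrors the study):

```python
# Minimal sketch: fit the four classifiers and report accuracy, sensitivity
# (recall on the positive class), and specificity on a held-out split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
X = rng.random((1000, 13))                     # 13 predictors, as in the study
y = (X[:, :4].sum(axis=1) + 0.3 * rng.standard_normal(1000) > 2.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"sens={tp / (tp + fn):.3f} spec={tn / (tn + fp):.3f}")
```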
Logistic regression analysis was performed to determine the influence of the predictors of high depression risk in the elderly from low-income households, and the results are shown in Table 8. The factors that affected the level of depression were gender, number of household members, subjective health awareness, family support, social support, and satisfaction with leisure life. In the case of gender, the probability of developing depression in women was 1.86 times (OR=1.861, 95% CI=1.173–2.954) higher than in men. As the number of household members increased by one, the probability of depression decreased by a factor of 0.69 (OR=0.692, 95% CI=0.513–0.933). For subjective health awareness, an increase of one level was associated with a 0.40-fold (OR=0.403, 95% CI=0.312–0.522) lower probability of depression. Further, family support (OR=0.613, 95% CI=0.494–0.759), social support (OR=0.711, 95% CI=0.552–0.916), and leisure life satisfaction (OR=0.425, 95% CI=0.328–0.425) showed the probability of depression decreasing by factors of 0.61, 0.71, and 0.42, respectively, for each one-level increase.
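The odds ratios and confidence intervals in such a table come from exponentiating logistic regression coefficients; a minimal sketch with hypothetical variables and simulated data:

```python
# Minimal sketch: recover odds ratios and 95% CIs from a fitted logistic
# regression, as in Table 8. All variables and data are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "female": rng.integers(0, 2, 500),
    "household_size": rng.integers(1, 5, 500),
    "leisure_satisfaction": rng.integers(0, 5, 500),
})
logit = (0.6 * df["female"] - 0.4 * df["household_size"]
         - 0.8 * df["leisure_satisfaction"] + 1.5)
df["depressed"] = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["female", "household_size", "leisure_satisfaction"]])
fit = sm.Logit(df["depressed"], X).fit(disp=0)

ors = np.exp(fit.params)            # odds ratios
ci = np.exp(fit.conf_int())         # 95% CI, also on the OR scale
print(pd.concat([ors.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```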
Here is the original post:
Development of a prediction model for the depression level of the ... - Nature.com
Process Mining Builds On Process Mapping With Data for Greater … – Acceleration Economy
Even post-pandemic, chief procurement officers (CPOs) are still dealing with shortages, long lead times, and inflation. At the same time, businesses are mandating cost reductions (or at minimum, cost containment). Suppliers are having similar challenges. So, if you can't ask suppliers to reduce the price for incoming goods and services, what can you do?
Transforming business processes is the next frontier of savings opportunities for CPOs. Process mining is a great enabler to provide insights needed to transform processes. It can identify opportunities faster, more accurately, and more comprehensively than tools used in the past. I'll go into more detail on how process mining works below, but first it's important to understand the difference between process mapping and process mining.
Until recently, when there was a perceived need to improve a process, a business would usually go through a process mapping event. That's a very people-driven exercise where someone is responsible for documenting a process from start to finish. They talk to the individuals involved in the process, identify the technology used, and create a visual workflow of how something gets done.
Then this map can be analyzed for breaks in the chain, redundant steps, bottlenecks, or other problems. Because it relies on input from humans, it's the best representation of what people believe to be true. But it will never be completely accurate because it's based on people's interpretations of how things work. It's qualitative, not quantitative.
Process mining is the digital sibling of process mapping. Where process mapping is all about perceptions, process mining is all about data. In process mining, software containing data-mining algorithms is applied to the event logs of IT systems used by a company. The algorithm creates a workflow model based on the trends it sees in the event-log data.
By harvesting data, process mining can create a more accurate map of what's happening because it is quantitative, not qualitative. Whereas process mapping paints a picture of what the organization believes to be true, process mining reveals what is actually happening.
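To make the contrast concrete, here is a minimal, library-free sketch of the core process mining step: reconstructing a directly-follows workflow model from raw event-log records. The log schema and activity names are invented for illustration; production tools such as Celonis operate on real system event logs at scale.

```python
# Sketch: derive a directly-follows graph (a basic process mining
# model) from event-log records. Rows are (case_id, activity,
# timestamp); the data here is invented for illustration.
from collections import Counter

event_log = [
    ("PO-1", "create_requisition", 1), ("PO-1", "approve", 2),
    ("PO-1", "send_po", 3),            ("PO-1", "receive_goods", 4),
    ("PO-2", "create_requisition", 1), ("PO-2", "send_po", 2),  # skipped approval
    ("PO-2", "receive_goods", 3),
]

# Group events by case and order each trace by timestamp.
traces = {}
for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces.setdefault(case_id, []).append(activity)

# Count how often activity A is directly followed by activity B.
dfg = Counter()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: {n}")
# Deviations (e.g. send_po without approve) surface as unexpected edges.
```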
Research by Celonis, a process mining software provider and member of Acceleration Economy's Top 10 AI/Hyperautomation Short List, identifies several strong process mining use cases for procurement.
One of my favorite use cases is identifying process automation opportunities based on repetitive tasks where the same kind of input or query is done over and over. Invoice entry is a task that comes to mind; process mining can dig deeper and identify similar opportunities that may not be evident.
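As a rough illustration of that idea, the sketch below flags highly repetitive activities in an event log as automation candidates; the activity names and the frequency threshold are hypothetical.

```python
# Sketch: flag automation candidates by counting how often the same
# activity repeats in an event log. Data and threshold are illustrative.
from collections import Counter

events = ["enter_invoice", "enter_invoice", "approve_invoice",
          "enter_invoice", "match_po", "enter_invoice"]

counts = Counter(events)
AUTOMATION_THRESHOLD = 3  # hypothetical cutoff for "highly repetitive"
candidates = [a for a, n in counts.items() if n >= AUTOMATION_THRESHOLD]
print(candidates)  # ['enter_invoice']
```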
Process mining isn't for every business. It relies on data-mining algorithms, which need data to function. The more manual the business, the less reliable process mining results are. But for large or complex organizations, process mining is an opportunity to see what is truly happening and identify opportunities to transform processes.
See the original post:
Process Mining Builds On Process Mapping With Data for Greater ... - Acceleration Economy
Alkymi launches Alpha, a Generative AI-powered solution built for … – PR Newswire
Alpha enables financial services firms to get instant answers utilizing next-generation large language models.
NEW YORK, July 12, 2023 /PRNewswire/ -- Alkymi, the leading business system for unstructured data, today announced the launch of Alpha, a generative AI-powered solution that allows financial services teams to uncover instant answers from documents and proprietary data sets through an interactive chat experience and structured workflows. Using Alpha, customers will be able to accelerate critical business activities such as conducting due diligence, interpreting investment performance, crafting quarterly reports, as well as answering client inquiries.
Until now, there has been no way to securely apply large language models (LLMs) to targeted documents and data sets in financial services with a turnkey workflow. Because of these security and privacy concerns, financial services firms have had to sacrifice access to insights that could have a major impact on business outcomes.
Alkymi has solved this challenge by allowing firms to create their own unique, private, and secure data sets from over 25 supported file formats for data mining and insight generation. Alpha helps customers systematically generate answers from these document data sets in a user-friendly way. Alkymi is empowering customers to use LLMs with confidence, keeping their data encrypted and ensuring it is not used to train a model. To build confidence in model output, answers can be traced directly back to their sources in documents.
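The general pattern described here, retrieving relevant passages from a private document set and returning answers alongside their sources, can be sketched as follows. This is not Alkymi's implementation: the documents, the toy keyword-overlap retriever (standing in for embeddings and an LLM), and the function names are all illustrative assumptions.

```python
# Generic sketch of source-traceable question answering over a private
# document set. NOT Alkymi's implementation: the retriever is a toy
# keyword-overlap scorer standing in for embeddings plus an LLM.
documents = {
    "fund_report_q2.pdf": "Net asset value rose 4.2% in Q2 driven by equities.",
    "ddq_response.docx": "The fund applies daily liquidity risk monitoring.",
}

def retrieve(question, docs, top_k=1):
    """Rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

question = "How did net asset value change in Q2?"
for source, passage in retrieve(question, documents):
    # A production system would pass the passage to an LLM; either way,
    # the answer is emitted with its source document for traceability.
    print(f"answer context: {passage}\nsource: {source}")
```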
Customers get insights instantly and can continue to drill down for more context where needed, while being assured of the security of their proprietary data.
"One of the key obstacles that financial services firms face when adopting generative AI is the lack of control - how you apply the technology safely and transparently," says Harald Collet, CEO of Alkymi. "We built Alpha to remove these obstacles through an industry-focused user experience that empowers customers to capture the massive opportunity of generative AI."
"We see tremendous potential in using LLM-powered tools," says Robin Williams, Managing Director of Asset Vantage. "Alkymi's focus on financial services use cases makes Alpha particularly exciting because they understand the business outcomes clients like us need, including the security and governance requirements of financial services firms and their data."
Alpha by Alkymi enables deeper discovery and empowers customers to generate insights, all within a secure environment and with full traceability. It is currently widely available and more information can be found at alkymi.io/product/alpha.
About Alkymi
Alkymi is the business system for financial services firms to unlock their unstructured data utilizing the most advanced machine learning and AI technology. Alkymi helps leading firms like Interactive Brokers, SimCorp, and Strategic Investment Group accelerate decision making by giving them the tools to understand, transform, and leverage critical data found in emails and documents. For more information, visit http://www.alkymi.io.
Media Contact: Bethany Walsh
SOURCE Alkymi
More:
Alkymi launches Alpha, a Generative AI-powered solution built for ... - PR Newswire