What Is Unsupervised Machine Learning? – The Motley Fool

Artificial intelligence (AI) is an area that focuses on enabling machines and software to process information and make decisions autonomously. Machine learning, a component of AI, involves computer systems enhancing their problem-solving and comprehension of complex issues through automated techniques.

The three central machine-learning methodologies that programmers can use are supervised learning, unsupervised learning, and reinforcement learning. For in-depth information on supervised machine learning and reinforcement machine learning, kindly refer to the articles dedicated to them. Here you can read up on the basics of unsupervised machine learning.

Image source: Getty Images.

With unsupervised machine learning, a system is like a curious toddler exploring a world they know nothing about. The system is exploring data without knowing what it's looking for but is excited -- in a digital kind of way -- about any new pattern it stumbles upon.

With this type of machine learning, algorithms sift through heaps of unstructured data without any specific directions or end goals in mind. They are looking for previously unknown patterns, much as you might look for a new stock pick in an overlooked corner of the market. This is rarely the last step since the owner of the raw data typically applies more sophisticated deep learning or supervised machine learning analyses to any potentially interesting patterns.

Why should you care about this artificial intelligence toddler on a quest without a firm goal? Well, unsupervised machine learning is actually on the cutting edge of technology and innovation. It's a key player in everything from autonomous vehicles learning to navigate roads to recommendation algorithms on your favorite streaming platforms. This pattern-finding method is a powerful first step in a deep analysis of any complex topic, from weather forecasting to genetic research.

Two major types of unsupervised learning are clustering and association. Clustering groups similar data points together, while association looks for rules describing which items tend to occur together.
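To make the clustering idea concrete, here is a minimal, hypothetical sketch using scikit-learn's KMeans. The customer data and variable names are made up for illustration and are not tied to any company mentioned in this article.

```python
# Hypothetical sketch: clustering a small table of customer features with
# scikit-learn's KMeans. The data and variable names are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [monthly_spend, visits_per_month] for one made-up customer.
customer_features = np.array([
    [120.0, 4], [95.0, 3], [300.0, 12],
    [280.0, 10], [15.0, 1], [22.0, 2],
])

# Ask for 3 groups; the algorithm finds them without any labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(customer_features)
print(cluster_ids)  # e.g. [1 1 2 2 0 0] -- which cluster each customer fell into
```

The point is that no one told the algorithm what the groups mean; it simply found structure in unlabeled data, which an analyst can then investigate further.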

So now you know what unsupervised machine learning is and why it matters. How can you take this newfound knowledge and put it to good use?

First off, you can make informed investment decisions. Companies leveraging unsupervised learning are often poised for growth as this technology continues to evolve. Think about Amazon (AMZN -1.27%) using unsupervised learning for its product recommendations or Netflix (NFLX -2.99%) running unsupervised machine learning routines across years of collected viewership data to generate your streaming home page and make future content production decisions.

These applications aren't just fun toys -- they are business advantages and growth drivers.

Also, AI and machine learning continue to reshape many industries. Whether you're into FAANG stocks or emerging AI startups, knowledge about unsupervised learning can give you an edge in evaluating a company's tech prowess and potential for future success.

We all appreciate a bit of connection, right? Well, thanks to unsupervised machine learning, we're getting better at finding people we might know or like on social media platforms. Facebook is a prime example.

Have you ever wondered how Facebook seems to know who your actual friends from high school are -- the ones you may actually want to keep in touch with? It's not sorcery. It's unsupervised learning in action.

Meta Platforms' (META -0.29%) massive social network continually analyzes a trove of user data, looking for patterns and shared features among users. Common friends are a helpful clue; similar locations and shared interests can point the platform in the right direction, and mutual workplaces can be the clincher. None of these qualities is enough in itself to find that long-lost flame or forgotten friend, but they add up through the power of unsupervised machine-learning algorithms.

So when Facebook suggests "People You May Know," it essentially gives you the output of an unsupervised learning model. The social network isn't just pulling these suggestions out of a digital hat. Each one is the result of a complex analysis of patterns and connections.

View original post here:
What Is Unsupervised Machine Learning? - The Motley Fool

Read More..

Labelling cryptocurrency as ‘gambling’ shows lack of understanding and misses the solution, expert says – ABC News

When a UK parliamentary committee proposed last month that cryptocurrency be regulated as gambling, it didn't take long for the Treasury to reject the idea.

But the fact that it was suggested at all is revealing, says Gavin Brown, associate professor in financial technology at the University of Liverpool.

"[The committee] didn't really understand the technology," he says.

And in this, they aren't alone, despite cryptocurrencies (digital currencies designed to offer an alternative payment method to traditional money) now being over a decade old.

"I see that all the time. I'll get a taxi in London and the taxi driver will know ten times more [about cryptocurrency] than the CEO of a multinational bank I'm about to visit," Brown says.

"We still see that disparity of knowledge, and not just from people on the street, but also from people who are actually making the policies who should know better."

That's because crypto is "powerful stuff", he says.

The largest ever Bitcoin transaction was for just over $US1 billion ($1.5 billion), which, to move without a bank, carried a transaction fee of $US3.56 ($5.35).

"And it cleared and settled in minutes," Brown says.

He argues that ignorance of cryptocurrency is risky.

"Western Anglo-Saxon economies are stuck between a rock and a hard place, because it's not going away and it's a constant threat."

There are thousands of different cryptocurrencies (Bitcoin is the biggest), and trying to regulate them is anything but simple.

Larger crypto companies are centralised, meaning they are traditional companies with shareholders or a board of directors.

But the same is not true of cryptocurrencies, which are decentralised.

"The problem we have with things like Bitcoin, is that it's not really controllable or ban-able in a traditional sense because [it] doesn't have a CEO, a head office, any employees, an email address, doesn't file any accounts, doesn't have any buildings, has no AGM, has no shareholders," Brown says.

"Literally, Bitcoin is an idea. It's a computer program that's being run globally all over the world at the same time."

In some senses, that elusiveness is exactly the point.

"[Cryptocurrency] has been deliberately constructed in a way that is anti-state, and almost naturally beyond the reach of regulators," Brown says.

It's one of "a ton of downsides [associated with it], like nefarious use by criminals", he says.

John Reed Stark, a lawyer in Washington DC specialising in the intersection of law and technology, told ABC's Four Corners last year that "horrific crimes from ransomware attacks, and terrorism, and evading sanctions during war time drug dealing [and] sex trafficking" are crimes that are "now a lot easier to do because of cryptocurrency".

Natasha Gillezeau, SXSW Sydney production lead and former Australian Financial Review tech journalist, says "people need to understand how serious [cryptocurrency] is".

"We have to understand how much of a marketing and advertising push that crypto [companies have]done in the last few years," she tells ABC RN's Download This Show.

"We're talking sports stadiums [sponsored by] crypto.com, we're talking outreach to influencers We're actually in a different point in the cycle of how much the marketing and advertising industry has legitimised it.

"I've been in conversations with people who have said, 'We target people deliberately on Facebook and Instagram, that we know have gambling problems, with crypto ads because they're more likely to flip than others'."

While Gillezeau doesn't see the UK's gambling regulation proposal as the best solution to the problem, she believes it does recognise "the human effects of cryptocurrency".

"Probably what these British MPs [who raised the proposal] are speaking to is that there are certain segments of society that have been affected and blasted the last few years with crypto-specific advertising, they've lost a lot of money and this is a response," she says.

If crypto trading was designated as gambling, platforms could face additional licensing rules, requirements to protect vulnerable users, stake limits and closer control of advertising.

Brown can also appreciate some of the motivation to align cryptocurrency use with gambling regulations such as these.

"[Cryptocurrency]has the power to defraud, it has the power for people to lose significant amounts of wealth, it kind of feels a bit like gambling as well. And therefore, by taking that kind of ultra prudent label of gambling and just pinning it on it, it's quickand it plays to that downside risk agenda."

It also allows regulators to dip into, and "just repurpose", ready-made law.

"But that misses a trick," Brown says.

"These new types of technology are not gambling, they're very different to gambling, actually. There is no house and punter. In fact, it's much more nuanced than that."

Here in Australia, around one million people owned cryptocurrency in mid-2022. In the UK, 5.2 million people, or one in nine, have either used or owned cryptocurrency.

"It's come that far in 13 years," Brown says.

"Go forward another 10 years. What happens if that number [in the UK] is 30 million or 40 million?

"What happens if every British person or every Australian person wakes up and says, 'I'm a bit sick of inflation, I'm sick of interest rates, I'm sick of my government or whoever controlling money in a certain way. I want a different type of money'.

"Well, guess what? There is this alternative type of money and all you need is an internet connection to access it."

The more a population uses alternative currency, the more difficult it becomes to control its economy, Brown says.

"If people aren't using that [traditional] currency, you're completely emasculated. That right hand of your two-handed approach is gone."

After presenting on cryptocurrencies to the UK Treasury six years ago, Brown was asked, "If people start using this [cryptocurrency], who pays for schools? Who pays for roads? Who pays for defence?"

"This is dangerous", the person said.

And Brown agrees.

"For so long cryptocurrencies and digital assets have been kept at arm's length our fingers in the ears, 'let's hope it'll go away, let's hope it'll disappear'.

"Nation states would like it to go away, but it's just not going away.

"The challenge we have, especially for countries like the UK and Australia, is because financial services are such an important part of the economy, we can't afford to get left behind."

Governments must have an effective digital strategy, he says. And while crypto itself might be extremely difficult to regulate, the same is not true of the people and companies who interact with it.

"If someone says, 'Hey, we're a cryptocurrency bank', well, guess what? I can regulate you as a bank of a digital asset.

"If someone says, 'I'm a prime broker', or 'I want to be a custodian of Bitcoin', or 'I want to be a financial adviser of digital assets', we can regulate those people because they are companies and individuals in a traditional sense.

"And that's a much more pragmatic thing to do."

View original post here:
Labelling cryptocurrency as 'gambling' shows lack of understanding and misses the solution, expert says - ABC News

Read More..

Mentorship and machine learning: Graduating student Irene Fang is … – University of Toronto

Majoring in human biology and immunology, Irene Fang capitalized on opportunities inside and outside the classroom to research innovative methods in ultrasound detection driven by artificial intelligence and machine learning. She's also working on research into cells and proteins in humans that could lead to new treatments and therapies for immunocompromised patients.

As she earned her honours bachelor of science degree, Fang always wanted to help others succeed. As a senior academic peer advisor with Trinity College, she's admired throughout the community for her brilliance, kindness and dedication to U of T.

"I want to keep giving back because I am so appreciative of the upper-year mentors I connected with, starting in first year," says Fang. "They continue to serve as an inspiration, motivating me to further develop professional and personal skills."

Why was U of T the right place for you to earn your undergraduate degree?

U of T provided a plethora of academic, research and experiential learning opportunities alongside a world-class faculty to help cultivate my curiosity and consolidate my knowledge. In conjunction with an unparalleled classroom experience, I gained a real-world perspective with international considerations through the Research Opportunities Program.

I would be remiss if I didn't also mention how extracurricular activities enhanced and enriched my university experience. The many clubs at U of T helped me focus on my passions and make meaningful connections with like-minded peers who became my support network, enabling me to reach my full potential.

How do you explain your studies to people outside your field?

I'm interested in machine learning, which is an offshoot of artificial intelligence that teaches and trains machines to perform specific tasks and identify patterns through programming.

There are two types of machine learning. Supervised learning involves training your machine learning algorithm with labelled images. In unsupervised learning, your algorithm learns with unlabeled images; this is advantageous as it eliminates the need to look for expert annotators or sonographers to label the images, saving time and costs. My research project compared how well unsupervised learning was able to identify and classify the three distinct ultrasound scanning planes at the human knee with supervised learning, the current standard for machine learning in ultrasound images.

My research project in immunology seeks to explore how a particular protein or receptor expressed on a specific subpopulation of human memory B cells mediates their immune responses. This is significant as memory B cells generate and maintain immunological memory, eliciting a more rapid and robust immune response upon the re-exposure to the same foreign invader, such as a pathogen or toxin, enabling a more effective clearance of the infection.

How is your area of study going to improve the life of the average person?

It is absolutely fascinating that AI has already revolutionized the medical field. Specifically, AI possesses the potential to aid in the classification of ultrasound images, enhancing early detection and diagnosis of internal bleeding because of injuries or hemophilia. Overall, AI may lead to more efficient care for patients, thereby improving health outcomes.

In terms of my immunology research, since the memory B cells expressing the specific receptor are dysregulated in people suffering from some autoimmune disorders and infectious diseases, a better understanding of how memory B cells are regulated could provide valuable insight into the underlying mechanisms of such diseases, enabling scientists to develop new therapies that alleviate patients' symptoms.

What career or job will you pursue after graduation?

I aspire to pursue a career in the medical field, conduct more research and nurture my profound enthusiasm for science while interacting with a diverse group of people. I hope to devote my career to improving human health outcomes while engaging in knowledge translation to make science more accessible to everyone.

You spent time at U of T as an academic peer advisor. Why was this work so important to you and what made it so fulfilling?

I remember feeling overwhelmed as a first-year student until I reached out to my academic peer advisors. Had I not chatted with them, I would not have known about, let alone applied for, my first research program. Looking back, it opened the door to many more new, incredible possibilities and opportunities. This experience made me realize the significance and power of mentorship, inspiring me to become an academic peer advisor. Seeing my mentees thrive and achieve their goals has made this role so rewarding so much so that I am determined to engage in mentorship throughout my career after graduation.

What advice do you have for current and incoming students to get the most out of their U of T experience?

Ask all questions because there are no silly questions. Get involved, whether it be volunteering, partaking in work-study programs, sports or joining a club. Meeting new people and talking to strangers can be daunting, but the undergraduate career is a journey of exploration, learning and growth.

Be open-minded and don't be afraid to try something new. Immersing yourself in distinct fields enables you to discover your interests and passions, which can lead you to an unexpected but meaningful path.

Also, be kind to yourself because failures are a normal part of the learning process; what's important is that you take them as an opportunity to learn, grow and bolster your resilience. And finally, although academia and work can keep you busy, remember to allocate time for self-care. Exercise, sleep and pursue hobbies, because mental health is integral for success in life.

See more here:
Mentorship and machine learning: Graduating student Irene Fang is ... - University of Toronto

Read More..

A reinforcement learning approach to airfoil shape optimization … – Nature.com

In the following section, we present the learning capabilities of the DRL agent with respect to optimizing an airfoil shape, trained in our custom RL environment. Different objectives for the DRL agent were tested, gathered into three tasks. In Task 1, the environment is initialized with a symmetric NACA0012 airfoil and successive tests were performed in which the agent must (i) maximize the lift-to-drag ratio L/D, (ii) maximize the lift coefficient Cl, (iii) maximize endurance Cl^{3/2}/Cd, and (iv) minimize the drag coefficient Cd. In Task 2, the environment is initialized with a high-performing airfoil having high lift-to-drag ratio and the agent must maximize this ratio. The goal is to test if the learning process is sensitive to the initial state of the environment and if higher-performing airfoils can potentially be produced by the agent. In Task 3, the environment is initialized with this same higher-performing airfoil, but flipped along the y axis. Under this scenario, we investigate the impact of initializing the environment with a poor-performing airfoil on the agent and determine if the agent is able to modify the airfoil shape to recoup a high lift-to-drag ratio. Overall, these tasks demonstrate the learning capabilities of the DRL agent to meet specified aerodynamic objectives.

Since we are interested in evaluating the drag of the agent-produced airfoils, the viscous mode of Xfoil is used. In viscous flow conditions, Xfoil only requires the user to specify a Reynolds number (Re) and an airfoil angle of attack (alpha). In all tasks, the flow conditions specified in Xfoil were kept constant. A zero-degree angle of attack and a Reynolds number equal to 10^6 were selected to define the design point for the flow conditions. The decision to keep the airfoil's angle of attack at a fixed position is motivated by the interpretability of the agent's policy. A less constrained problem, in which the agent can modify the angle of attack, would significantly increase the design space, leading to less interpretability of the agent's actions. Additionally, the angle of attack is chosen to be fixed at zero in order to easily compare the performance of agent-generated shapes with those found in the literature. The Reynolds number was chosen to represent an airfoil shape optimization problem at speeds under the transonic flow regime (ref. 15). Hence, given the relatively low Re number chosen, the flow is incompressible over the airfoil, although Xfoil does include some compressibility corrections when approaching transonic regimes (Karman-Tsien compressibility correction, ref. 43). All airfoils are thus compared at zero angle of attack.

Two parameters relating to the PPO algorithm in Stable Baselines can be set, namely the discount factor gamma and the learning rate. The discount factor impacts how important future rewards are to the current state: gamma = 0 favors short-term reward, whereas gamma = 1 aims at maximizing the cumulative reward in the long run. The learning rate controls the amount of change brought to the model: it is a hyperparameter tuning the PPO neural network. For the PPO agent, the learning rate must be within [5x10^-6, 0.003]. A study of the effects of the discount factor and learning rate on the learning process was conducted. This study shows that optimal results are found when using a discount factor gamma = 0.99 and a learning rate equal to 0.00025.
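As a rough illustration of how those settings might be wired up, the sketch below uses the Stable-Baselines3 PPO API with the reported discount factor and learning rate. The built-in Pendulum environment, the n_steps value and the total number of timesteps are placeholders, not the authors' actual configuration, which relies on their custom airfoil environment.

```python
# Minimal sketch of configuring PPO with the hyperparameters reported in the
# article (discount factor 0.99, learning rate 0.00025), using Stable-Baselines3.
import gymnasium as gym
from stable_baselines3 import PPO

# In the paper a custom airfoil-design environment is used; a built-in toy
# environment stands in here so the sketch runs as-is.
env = gym.make("Pendulum-v1")

model = PPO(
    "MlpPolicy",
    env,
    gamma=0.99,            # discount factor reported in the article
    learning_rate=2.5e-4,  # 0.00025, within the allowed [5e-6, 0.003] range
    n_steps=2048,          # update frequency ("N steps"); value is an assumption
    verbose=0,
)
model.learn(total_timesteps=10_000)  # total training iterations (illustrative)
```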

In building our custom environment, we have set some parameters to limit the generation of unrealistic shapes by the agent. These parameters help take into account structural considerations as well as limit the size of the action space. For instance, we define limits to the thickness of the produced shape. If the generated shape (resulting from the splines represented by the control points) exhibits a thickness over or under a specified limit value, the agent will receive a poor reward. Regarding the action space, we set bounds for the change in thickness and camber. This allows the agent to search in a restricted action space, thus eliminating a great number of unconverged shapes resulting from actions bringing changes to the airfoil shape that are too extreme. These parameters are given in Table 2. Moreover, the iterations parameter is the number of times Xfoil is allowed to rerun a calculation for a given airfoil in the event the solver does not converge. Having a high iterations number increases the convergence rate of Xfoil but also increases run times.
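A minimal sketch of what such bounds might look like in a Gym-style environment is given below. Every numeric limit (the x-position range, the allowed changes in thickness and camber, the thickness limits and the penalty value) is an assumption made purely for illustration; the paper's actual Table 2 values are not reproduced here.

```python
# Sketch of a bounded action space and a thickness guard of the kind described
# in the text. All numeric limits are illustrative assumptions.
import numpy as np
from gymnasium import spaces

# action = [x position along the chord, change in thickness, change in camber]
action_space = spaces.Box(
    low=np.array([0.0, -0.01, -0.01], dtype=np.float32),
    high=np.array([1.0, 0.01, 0.01], dtype=np.float32),
)

MIN_THICKNESS, MAX_THICKNESS = 0.01, 0.40  # assumed structural limits


def thickness_penalty(shape_max_thickness):
    """Return a poor reward if the generated shape violates the thickness limits,
    otherwise None so the normal aerodynamic reward is used."""
    if not (MIN_THICKNESS <= shape_max_thickness <= MAX_THICKNESS):
        return -1.0  # assumed penalty value
    return None


print(action_space.sample())    # a random, bounded shape modification
print(thickness_penalty(0.55))  # -1.0: too thick, penalized
print(thickness_penalty(0.12))  # None: within limits
```

Bounding the action space this way trades some design freedom for far fewer unconverged Xfoil runs, which is the rationale given in the text.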

The environment is initialized with a symmetric airfoil having L/D = 0, Cl = 0 and Cd = 0.0054 at alpha = 0 and Re = 10^6. In a first experiment, the agent is tasked with producing the highest lift-to-drag airfoil, starting from the symmetric airfoil. During each experiment, the agent is trained over a total number of iterations (defined as the total timestep parameter), which are broken down into episodes having a given length (defined as the episode length parameter). The DRL agent is updated (i.e., changes are brought to the neural network parameters) every N steps. At the end of an experiment, several results are produced. Figure 7a displays the L/D of the airfoil successively modified by the agent at the end of each episode.

Learning curves for max L/D objective starting with a symmetric airfoil.

In Fig. 7a, each dot represents the L/D value of the shape at the end of an episode and the blue line represents the L/D running average value over 40 successive episodes. The maximum L/D obtained over all episodes is also displayed. Settings regarding the total number of iterations, episode length and N steps for the experiment are given above the graph. It can be observed from Fig. 7a that, starting with a low L/D during early episodes, the L/D at the end of an episode increases with the number of episodes. Though significant variance in the L/D at the end of an episode can be seen, with values ranging between L/D = -30 and L/D = 138, the average value however increases and stabilizes around L/D = 100. This increase in L/D suggests that the agent is able to learn the appropriate modifications to bring to the symmetric airfoil, resulting in an airfoil having high lift-to-drag ratio. We are also interested in tracking a score over a whole episode. Here, we arbitrarily define this score as the sum of the L/D of each shape produced during an episode. For instance, if an episode comprises 20 iterations, the agent will have the opportunity to modify the shape 20 times, thus resulting in 20 L/D values. Summing these values corresponds to the score over one episode. If the agent produces a shape that does not converge in the aerodynamic solver, a value of 0 is added to the score, thus penalizing the score over the episode if the agent produces highly unrealistic shapes. The evolution of the score with the number of episodes played is displayed in Fig. 7b.
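The score definition is simple enough to state in a few lines. The sketch below assumes unconverged shapes are flagged as None; the numbers are made up.

```python
# Minimal sketch of the episode "score" defined in the text: the sum of L/D for
# every shape produced during an episode, with 0 added when Xfoil fails to
# converge on a shape. Values below are illustrative.
def episode_score(l_over_d_per_step):
    """l_over_d_per_step: list of L/D values, with None marking unconverged shapes."""
    return sum(v if v is not None else 0.0 for v in l_over_d_per_step)


# A 20-iteration episode where two shapes failed to converge (None entries):
example = [5.0, 12.3, None, 25.1, 40.7, 55.0, None, 63.2, 70.4, 78.9,
           85.1, 90.3, 95.6, 99.0, 101.2, 103.5, 104.0, 104.8, 105.1, 105.4]
print(episode_score(example))  # unconverged steps contribute 0 to the score
```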

Figure 7b shows the significant increase in the average score at the end of an episode, signaling that the agent is learning the optimal shape modifications. We can then visualize the best produced shape over the training phase in Fig. 8.

Agent-produced airfoil shape having highest L/D over training.

In Fig. 8, the red dots are the control points accessible to the agent. The blue curve describing the shape is the spline resulting from these control points. It is interesting to observe that the optimal shape produced shares the characteristics of high lift-to-drag ratio airfoils, such as those found on gliders, having high camber and a drooped trailing edge. Finally, we run the trained agent on the environment over one episode and observe the generated shapes in Fig. 9. Starting from the symmetric airfoil, we can notice the clear set of actions taken by the agent to modify the shape to increase L/D. The experiment detailed above was repeated by varying total timesteps, episode lengths and N steps.

Trained agent modifies shape to produce high L/D.

We then proceed to train the agent under different objectives: maximize Cl, maximize endurance and minimize Cd. Associated learning curves and modified shapes can be found in Figs. 10, 11, 12 and 13.

Learning curves for max Cl objective starting with a symmetric airfoil.

Learning curves for max Cl^{3/2}/Cd objective starting with a symmetric airfoil.

For the Cd minimization objective, the environment is initialized with a symmetric airfoil having Cd = 0.0341. This change of initial airfoil, compared to the previously used NACA0012, is justified by enhanced learning visualizations.

Learning curves for min Cd objective starting with a symmetric airfoil.

Trained agent modifies shape to produce low Cd starting with a low-performance airfoil.

Similarly, the results show a clear learning curve during which both the metric of interest and the score at end of episode increase with the number of episodes. The learning process appears to happen within the first 100 episodes as signaled by the rapid increase in the score and then plateaus, oscillating around an average score value.

A second set of experiments was performed to assess the impact of the initial shape. The environment is initialized with a high-performing airfoil (i.e., having a relatively high lift-to-drag ratio) and the agent is tasked with bringing further improvement to this airfoil. We chose this airfoil by investigating the UIUC database (ref. 41) and selected the airfoil having the highest L/D. This corresponds to the Eppler 58 airfoil (e58-il) having L/D = 160 at alpha = 0 and Re = 10^6, displayed in Fig. 14. Results for this experiment are displayed in Fig. 15.

Eppler 58 high lift-to-drag ratio airfoil.

Learning curves for max L/D objective starting with a high L/D airfoil.

It is interesting to compare the learning curves and average scores achieved when starting with the symmetric airfoil and the high performance airfoil.

In Fig. 16, we can observe that for both initial situations there is an increase in the average score during early episodes followed by stagnation, demonstrating the learning capabilities of the agent. However, the plateaued average score reached is significantly higher when the environment is initialized with the high-performance airfoil, given that the environment is initialized in an already high-reward region (through the high-performance airfoil). Additionally, it was observed that a slightly higher maximum L/D value could be achieved when starting with the high lift-to-drag ratio airfoil. Overall, Task 1 and Task 2 emphasize the robustness of the RL agent to successfully converge on high L/D airfoils, regardless of the initial shapes (in both experiments, the agent converges on airfoils having L/D > 160). The agent-generated airfoil for Task 2 is represented in Fig. 21a.

Initial airfoil impact on the learning curve.

For Task 3, the starting airfoil is a version of the Eppler 58 airfoil that has been flipped around the y axis. As such, the starting airfoil has a lift-to-drag ratio opposite to that of the Eppler 58 (i.e., L/D = -160) and thus exhibits low aerodynamic performance. The goal for this task is for the agent to modify the shape into a high-performing airfoil, having high L/D.

In Fig. 17, we display the learning curves associated with the score and L/D value at the end of each episode when the environment is initialized with the flipped e58 airfoil at the beginning of each episode. A noticeable increase in both the score and L/D values between episode 30 and episode 75 can be observed, followed by a plateau region. This demonstrates that the agent is able to learn the optimal policy to transform the poor-performing airfoil into a high-performing airfoil by bringing adequate changes to the airfoil shape. The agent then applies this learned optimal policy after episode 100. Moreover, the agent is capable of producing airfoils having lift-to-drag ratios equivalent to or higher than the Eppler e58 high-performance airfoil, signaling that the initial airfoil observed by the agent does not impact the optimal policy learned by the agent, but rather only delays its discovery (see Figs. 15 and 17).

Score and L/D learning curves when starting with a low performance airfoil.

An example of a high L/D shape produced by the DRL agent when starting with the flipped e58 airfoil is displayed in Fig. 18. It is interesting to notice that in this situation, the produced airfoil shares previously observed geometric characteristics, such as high camber and a drooped trailing edge, leading to a high L/D value. The trained agent is then run over one episode length in Fig. 19. By successively modifying the airfoil shape, we can observe that the agent is able to recover positive L/D values having started with a low-performance airfoil. This demonstrates the correctness of the behavior learned by the agent.

Agent-produced airfoil shape when starting with low performance airfoil.

Trained agent modifies shape to produce high L/D starting with a low-performance airfoil.

Finally, the best produced shapes (i.e., those maximizing the metric of interest) for the different objectives and tasks can now be compared, as illustrated in Figs. 20 and 21.

Best performing agent-produced shapes under different objectives and a symmetric initial airfoil.

Best performing agent-produced shapes under different objectives and an asymmetric initial airfoil.

The results presented above demonstrate that the number of function evaluations (i.e., the number of times Xfoil is run and converges on a new shape proposed by the agent) depends on the task at hand. For instance, around 2,000 function evaluations were needed in Task 2, while 4,000 are needed in Task 1 and around 20,000 were required in Task 3. These differences can be explained by the distance that exists between the starting shape and the optimal shape. In other terms, when starting with the low performing airfoil, the agent has to perform a greater number of successful steps to converge on an optimal shape, whereas when starting with an already high-performance airfoil, the agent is close to an optimal shape and requires fewer Xfoil evaluations to converge on an optimal shape. The number of episodes needed to reach an optimal policy, however, appears to be between 100 and 200 episodes across all tasks. Overall, when averaging across all tasks performed in this research, approximately 10,000 function evaluations were needed for the agent to converge on the optimal policy.

Having trained the RL agent on a given aerodynamic task, the designer can then draw physical insight by observing the actions the agent follows to optimize the airfoil shape. From the results presented in this research, it can be observed that high camber around the leading edge and low thickness around the trailing edge are preferred shapes to maximize L/D, given the flow conditions used here. Observing the various policies corresponding to different aerodynamic tasks, the designer can then make tradeoffs between the different aerodynamic metrics to optimize. Multi-point optimization can be achieved by including multiple aerodynamic objectives in the reward. For example, if the designer seeks to optimize both L/D and Cl, a new definition of the reward could be: r = (L/D_current + Cl_current) - (L/D_previous + Cl_previous) (after having normalized L/D and Cl). However, multi-point optimization will decrease interpretability of the agent's actions. By introducing multiple objectives in the agent's reward, it will become more difficult for the designer to draw insight from shape changes and link those changes to maximizing a specific aerodynamic objective.
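A hedged sketch of that multi-objective reward is shown below; the normalization constants are assumptions chosen only so both terms sit on comparable scales, not values taken from the paper.

```python
# Sketch of the multi-objective reward suggested in the text:
# r = (L/D_current + Cl_current) - (L/D_previous + Cl_previous),
# after normalizing L/D and Cl. The normalization constants are assumptions.
L_OVER_D_SCALE = 160.0  # assumed: roughly the best L/D reported in the study
CL_SCALE = 2.0          # assumed: a plausible upper bound for Cl at alpha = 0


def multi_objective_reward(ld_curr, cl_curr, ld_prev, cl_prev):
    curr = ld_curr / L_OVER_D_SCALE + cl_curr / CL_SCALE
    prev = ld_prev / L_OVER_D_SCALE + cl_prev / CL_SCALE
    return curr - prev  # positive when the new shape improves the combined objective


print(multi_objective_reward(120.0, 1.1, 100.0, 0.9))  # improvement -> positive reward
```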

The proposed methodology reduces computational costs by leveraging a data-driven approach. Having learned an optimal policy for a given aerodynamic objective, the agent can be used to optimize new shapes, without having to restart the whole optimization process. More specifically, this approach can be used to alleviate the computational burden of problems requiring high-fidelity solvers (when RANS or compressibility are required). For these problems, the DRL agent can quickly find a first optimal solution, using a low-fidelity solver. The solution can then be refined using a higher-fidelity solver and a traditional optimizer. In other words, DRL is used in this context to extract prior experience to speed up the high-fidelity optimization. As such, our approach can speed up the airfoil optimization process by very rapidly offering an initial optimal solution. Similarly to ref. 8, our approach can also be used directly for high-fidelity models. To accelerate convergence speeds, the DRL agent is first trained using a low-fidelity solver in order to rapidly learn an optimal policy. The agent is then deployed using a high-fidelity solver. In doing so, this approach (i) reduces computational cost by shifting from a low- to a high-fidelity solver to speed up the learning process, (ii) is data-efficient as the policy learned by the agent can then be followed for any comparable problem and (iii) bears some generative capabilities as it does not require any user-provided data.

As reinforcement learning does not rely on any provided database, no preconception of what a good airfoil shape should look like is available to the agent. This results in added design freedom, leading the agent to occasionally generate airfoil shapes that can be viewed as unusual to the aerodynamicist's eye. In Fig. 22, we compare agent-produced shapes to existing airfoils in the literature. The focus is not on the agent's ability to produce a specific shape for given flow conditions and aerodynamic targets, but rather to illustrate the geometric similarities found on both existing airfoils and artificially-generated shapes. A strong resemblance between the agent-generated and existing airfoils can be observed. This highlights the rationality of the policy learned by the agent: having no preexisting knowledge on fluid mechanics or airfoils, an intelligent agent trained in the presented custom RL environment can generate realistic airfoil shapes.

We compare five existing airfoils to our agent-produced shapes in Fig. 22. In Fig. 22a and b, we compare the agent-produced shape to Whitcomb's supercritical airfoil. The shared flat upper surface, cambered rear and blunt trailing edge can be noticed (ref. 51). We then compare agent-generated shapes to existing high-lift airfoils. Here also, the geometric resemblance is noticeable, notably the shared high camber.

Airfoil shape comparison between agent-produced shapes and existing airfoils.

Detrimental effects of large episode lengths.

One observation was made when noticing drastic decreases in the average score at the end of an episode after a first period of increase. We believe this can be explained by the fact that when the episode length is large, once the agent has learned a policy allowing it to quickly (in relatively few iterations) attain high L/D values, the average score will then decrease because the agent reaches the optimal shape before the end of the episode. Within the remaining iterations before the episode ends, the agent continues to modify the shape hoping for higher performance, but reaches a limit where the shape is too extreme for the aerodynamic solver to converge, resulting in a poor reward. This would explain why we can observe in Fig. 23 a rapid increase in the score between 0 and 25 episodes, during which the agent explores various shapes and estimates an optimal policy, and a strong decrease in the score following this peak, during which the agent follows the determined optimal policy and reaches optimal shapes before the episode ends.

The results presented above demonstrate the ability of a DRL agent to learn how to optimize airfoil shapes, provided a custom RL environment to interact with. We now compare this approach to a classical simplex method, under the same possible action conditions: starting from a symmetric airfoil, the optimizer must successively modify the shape by changing thickness and camber at selected x positions to achieve the highest performing airfoil in terms of L/D.

Here, the optimizer is based on the Nelder-Mead simplex algorithm, capable of finding the minimum of a multivariate function without having to calculate the first or second derivatives (ref. 52). In this case, the function maps a 3-set of actions, [select x position, change thickness, change camber], to a -L/D value. More specifically, taking the 3-set of actions as inputs, the function modifies the airfoil accordingly, evaluates the modified airfoil in Xfoil and outputs the associated -L/D. As the optimizer tries to minimize the -L/D value, it searches for the 3-set that will maximize L/D. Once the optimizer finds the optimal 3-set of actions, the airfoil shape is modified accordingly and the optimizer is rerun on this new modified shape. This defines what we call one optimization cycle. Hence, the optimizer is tasked with the exact same optimization problem as the DRL agent: optimizing the airfoil shape to reach the highest L/D value possible by successively modifying the shape. During each optimization cycle, the optimizer evaluates the function a certain number of times. In Fig. 24, we monitor the increase in L/D with the number of function evaluations.
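For concreteness, the sketch below shows one such optimization cycle using SciPy's Nelder-Mead implementation. The evaluate_l_over_d function is a dummy stand-in for the real "modify the airfoil and run Xfoil" step, so its numbers are meaningless beyond keeping the example runnable.

```python
# Sketch of one optimization cycle of the simplex baseline: Nelder-Mead searches
# over a 3-set of actions [x position, delta thickness, delta camber] to
# minimize -L/D. The aerodynamic evaluation is a placeholder.
import numpy as np
from scipy.optimize import minimize


def evaluate_l_over_d(airfoil, action):
    # Placeholder with a single maximum so the sketch runs standalone; the real
    # setup would apply the action to the airfoil and query Xfoil instead.
    x_pos, d_thick, d_camber = action
    return (80.0
            - 100.0 * (d_thick - 0.005) ** 2
            - 50.0 * (d_camber - 0.008) ** 2
            - 10.0 * (x_pos - 0.6) ** 2)


def negative_l_over_d(action, airfoil):
    return -evaluate_l_over_d(airfoil, action)


airfoil = None                    # placeholder for the current shape
x0 = np.array([0.5, 0.0, 0.0])    # initial guess for the 3-set of actions
result = minimize(negative_l_over_d, x0, args=(airfoil,), method="Nelder-Mead")
print(result.x, -result.fun)      # best action found this cycle and its L/D
```

In the paper's setup, the action returned by each cycle is applied to the airfoil and the optimizer is rerun on the modified shape, which is why the total number of function evaluations grows so quickly.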

Simplex method approach: L/D increase with function evaluations for different starting points.

In the three situations displayed, it can be observed that the value of L/D increases with the number of function evaluations. However, the converged L/D value is significantly lower than values obtained through the DRL approach. For instance, even after 500 optimization cycles (i.e., 500 shape modifications and over 30,000 function evaluations), the optimizer is unable to generate an airfoil having L/D over 70. We know that this value of L/D is not a global optimum, as an L/D of at least 160 can be reached with the Eppler 58 airfoil from the UIUC database (ref. 41). Thus, it seems that the simplex algorithm has converged on a local minimum. Furthermore, as demonstrated in Fig. 24a and c, the converged L/D value found by the optimizer is highly dependent on the initial point. The airfoil shapes generated using the simplex method can be found in Fig. 25.

Gradient-free approach generated airfoil shapes.

In Table 3, we compare the converged L/D values, number of iterations and run times of the simplex method and DRL approach. In both approaches, the agent or optimizer can modify the airfoil 60 times. Although the number of iterations and run time are lower for the simplex method, the converged L/D value is far lower compared to the DRL approach.

This rapid simplex approach to the airfoil shape optimization problem highlights the benefits and capabilities of the presented DRL approach. First, the DRL approach seems less prone to convergence on local minima, as very high values of L/D can be achieved. Second, once the DRL agent has learned the optimal policy during a training period, it can be applied directly to any new situation whereas the simplex approach will require a whole optimization process for each new scenario encountered.

See the article here:
A reinforcement learning approach to airfoil shape optimization ... - Nature.com

Read More..

Cryptocurrency Quant’s Price Increased More Than 8% Within 24 hours – Benzinga

June 16, 2023 11:00 AM | 1 min read

Over the past 24 hours, Quant's (CRYPTO: QNT) price has risen 8.19% to $107.06. This is contrary to its negative trend over the past week where it has experienced a 3.0% loss, moving from $108.82 to its current price. As it stands right now, the coin's all-time high is $427.42.

The chart below compares the price movement and volatility for Quant over the past 24 hours (left) to its price movement over the past week (right). The gray bands are Bollinger Bands, measuring the volatility for both the daily and weekly price movements. The wider the bands are, or the larger the gray area is at any given moment, the larger the volatility.

The trading volume for the coin has risen 117.0% over the past week diverging from the circulating supply of the coin, which has decreased 0.21%. This brings the circulating supply to 14.54 million, which makes up an estimated 99.53% of its max supply of 14.61 million. According to our data, the current market cap ranking for QNT is #32 at $1.55 billion.

Powered by CoinGecko API

This article was generated by Benzinga's automated content engine and reviewed by an editor.

2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

See the rest here:
Cryptocurrency Quant's Price Increased More Than 8% Within 24 hours - Benzinga

Read More..

Breaking the 21-Day Myth: Machine Learning Unlocks the Secrets of … – SciTechDaily

A machine learning-based study by Caltech reveals that habit formation varies greatly, with gym habits taking six months to establish on average, while healthcare workers form a hand-washing habit in a few weeks. The study emphasized the power of machine learning in researching human behavior outside lab conditions.

New machine learning study finds different habits take varying amounts of time to take root.

Putting on your workout clothes and getting to the gym can feel like a slog at first. Eventually, you might get in the habit of going to the gym and readily pop over to your Zumba class or for a run on the treadmill. A new study from social scientists at Caltech now shows how long it takes to form the gym habit: an average of about six months.

The same study also looked at how long it takes healthcare workers to get in the habit of washing their hands: an average of a few weeks.

"There is no magic number for habit formation," says Anastasia Buyalskaya (PhD '21), now an assistant professor of marketing at HEC Paris. Other authors of the study, which appears in the journal Proceedings of the National Academy of Sciences, include Caltech's Colin Camerer, Robert Kirby Professor of Behavioral Economics and director and leadership chair of the T&C Chen Center for Social and Decision Neuroscience, and researchers from the University of Chicago and the University of Pennsylvania. Xiaomin Li (MS '17, PhD '21), formerly a graduate student and postdoctoral scholar at Caltech, is also an author.

"You may have heard that it takes about 21 days to form a habit, but that estimate was not based on any science," Camerer says. "Our work supports the idea that the speed of habit formation differs according to the behavior in question and a variety of other factors."

The study is the first to use machine learning tools to study habit formation. The researchers employed machine learning to analyze large data sets of tens of thousands of people who were either swiping their badges to enter their gym or washing their hands during hospital shifts. For the gym research, the researchers partnered with 24 Hour Fitness, and for the hand-washing research, they partnered with a company that used radio frequency identification (RFID) technology to monitor hand-washing in hospitals. The data sets tracked more than 30,000 gymgoers over four years and more than 3,000 hospital workers over nearly 100 shifts.

"With machine learning, we can observe hundreds of context variables that may be predictive of behavioral execution," explains Buyalskaya. "You don't necessarily have to start with a hypothesis about a specific variable, as the machine learning does the work for us to find the relevant ones."

Machine learning also let the researchers study people over time in their natural environments; most previous studies were limited to participants filling out surveys.

The study found that certain variables had no effect on gym habit formation, such as time of day. Other factors, such as one's past behavior, did come into play. For instance, for 76 percent of gymgoers, the amount of time that had passed since a previous gym visit was an important predictor of whether the person would go again. In other words, the longer it had been since a gymgoer last went to the gym, the less likely they were to make a habit of it. Sixty-nine percent of the gymgoers were more likely to go to the gym on the same days of the week, with Monday and Tuesday being the most well-attended.

For the hand-washing part of the study, the researchers looked at data from healthcare workers who were given new requirements to wear RFID badges that recorded their hand-washing activity. "It is possible that some health workers already had the habit prior to us observing them; however, we treat the introduction of the RFID technology as a shock and assume that they may need to rebuild their habit from the moment they use the technology," Buyalskaya says.

"Overall, we are seeing that machine learning is a powerful tool to study human habits outside the lab," Buyalskaya says.

Reference: "What can machine learning teach us about habit formation? Evidence from exercise and hygiene" by Anastasia Buyalskaya, Hung Ho, Katherine L. Milkman, Xiaomin Li, Angela L. Duckworth and Colin Camerer, 17 April 2023, Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.2216115120

The study was funded by the Behavior Change for Good Initiative, the Ronald and Maxine Linde Institute of Economics and Management Sciences at Caltech, and the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.

Read the rest here:
Breaking the 21-Day Myth: Machine Learning Unlocks the Secrets of ... - SciTechDaily

Read More..

Bitcoin’s New Frontier: Citizenship Investment in the Cryptocurrency … – usatales.com

Are you curious about the latest trend in the cryptocurrency world? The most popular cryptocurrency has opened up a new frontier for investors: citizenship investment. You can now use Bitcoin to obtain citizenship in certain countries.

The rise of Bitcoin has led to an increase in the number of countries that accept it as a form of payment for citizenship investment. This significant development reflects the growing importance of cryptocurrencies in the global economy. Investing in citizenship through Bitcoin may seem like an unconventional way to obtain a second passport, but it has become a popular option for many investors.

This article will explore the rise of Bitcoin and the impact it has had on citizenship investment programs around the world. Read this article to learn more about citizenship investment with Bitcoin and other cryptocurrencies.

In recent years, there has been a growing trend of individuals seeking citizenship in foreign countries through investment. With the rise of Bitcoin and other cryptocurrencies, many countries have started to accept digital currencies as a form of payment for their citizenship through investment programs. In this section, we will explore citizenship investment, why Bitcoin is an attractive investment option and the pros and cons of investing in Bitcoin for citizenship.

Citizenship investment, also known as economic citizenship or citizenship by investment, is a process whereby individuals can obtain citizenship in a foreign country by investing money in the country's economy. This investment can take the form of real estate, government bonds, or other assets. Many countries worldwide offer citizenship investment programs, including Vanuatu, Malta, and St. Kitts and Nevis.

Bitcoin has become an increasingly popular option for citizenship investment due to its decentralized nature and the potential for high returns. For those seeking an investment outside the traditional system, Bitcoin is not connected to any government or financial institution. Bitcoin has also grown immensely in value in recent years, making it an attractive option for profit-seeking investors.

Like any investment, investing in Bitcoin for citizenship has pros and cons. Here are some of the most important points to consider:

Pros:

Cons:

If you're considering investing in Bitcoin, there are several factors to weigh before making a decision. Let us explore some of the key factors to keep in mind.

You have to understand the market properly before investing in Bitcoin. Bitcoin is a highly volatile asset, and its value can fluctuate rapidly. It's essential to keep up to date with the latest news and trends in the market and to understand the underlying technology and the factors that can affect its value.

Investing in Bitcoin is not for everyone. It's a highly speculative asset with a significant risk of loss. Before investing, you should assess your risk tolerance and determine whether you're comfortable with the potential risks involved. Remember that you should never invest more than you can afford to lose.

You can use several different investment strategies when investing in Bitcoin. Some people prefer to buy and hold the coins, while others prefer to trade more frequently. Choosing a strategy that suits your investment goals and risk tolerance is essential. You must always examine the fees, costs and tax implications associated with each strategy.

Several Caribbean countries have actively promoted citizenship through investment programs (CIPs) to attract foreign investors. St. Kitts and Nevis, Antigua and Barbuda, and Dominica have been considered the most crypto-friendly for citizenship investment. These countries have been accepting Bitcoin and other cryptocurrencies as payment for citizenship applications since 2018.

Cryptocurrency management varies widely from country to country. Some countries have embraced cryptocurrency, while others have banned it outright. Countries accepting cryptocurrency regulations tend to focus on anti-money laundering (AML) and know-your-customer (KYC) requirements. Countries that have been more supportive of cryptocurrency include Malta, Switzerland, and Japan.

There are numerous advantages of citizenship in the crypto era. One of the primary advantages is that it allows investors to diversify their portfolios and protect their assets against political and economic instability. In addition, citizenship by investment programs allows investors to obtain a second passport, which can provide greater mobility and access to new markets.

The demand for cryptocurrency in the market has had a significant impact on investment opportunities. With the advent of cryptocurrency, investors now have access to a new asset class that was previously unavailable. This has created new investment opportunities, particularly for those interested in emerging technologies.

As of 2022, the total market capitalization of all cryptocurrencies was over $2 trillion. Bitcoin remains the most popular cryptocurrency, with a market share of over 40%. Apart from Bitcoin, other popular cryptocurrencies exist, such as Ethereum, Binance Coin, and Cardano.

The impact of cryptocurrency on traditional investment markets is still being studied. Some experts believe that cryptocurrency has the potential to disrupt traditional investment markets, while others believe that it will simply complement existing investment options.

In conclusion, Bitcoin has opened up a new frontier for citizenship investment in the cryptocurrency era. With the rise of crypto-friendly countries and their investment programs, investors can use their digital assets to acquire citizenship or residency in a foreign country. Investing in citizenship or residency programs can provide several benefits. It allows investors to diversify their portfolios and protect their assets from political instability or economic downturns in their home country.

More:
Bitcoin's New Frontier: Citizenship Investment in the Cryptocurrency ... - usatales.com

Read More..

U.S. House panel to vote on cryptocurrency bill in coming weeks: lawmaker – Yahoo Finance

By Pete Schroeder

WASHINGTON, June 13 (Reuters) - A key House Republican lawmaker said Tuesday that he intends to hold a committee vote on a comprehensive bill to establish a regulatory framework for cryptocurrency products in the coming weeks.

Representative Patrick McHenry, chairman of the House Financial Services Committee, said he expects to put a bill forward for the panel to consider after lawmakers return to work on July 11.

"I intend for this committee to mark up some form of this legislation when we return from the July 4 recess," he said at a hearing Tuesday.

McHenry has been leading an effort by some Republicans in Congress to pass a bill establishing clear rules for the crypto industry. A discussion draft put forward earlier this month by McHenry and others would clarify responsibilities for overseeing crypto products by regulators, and would give a pathway for crypto companies and exchanges to register with those agencies.

Crypto firms have been clamoring for such clarity from Congress, particularly as the Securities and Exchange Commission has taken a harder line, arguing most major crypto products are securities that must be registered and suing major exchanges.

But the prospects for the draft measure remain unclear. Democrats on the panel say they are considering the measure but have concerns. Representative Maxine Waters, the top Democrat on the committee, said Tuesday she worried that allowing crypto exchanges to receive provisional registration could enable bad actors.

And in the Senate, which must also pass any crypto legislation, key lawmakers like Senators Sherrod Brown and Elizabeth Warren have expressed even more skepticism about crypto products. (Reporting by Pete Schroeder)

Excerpt from:
U.S. House panel to vote on cryptocurrency bill in coming weeks: lawmaker - Yahoo Finance

Read More..

Zero-Shot Learning Demystified: Unveiling the Future of AI in … – YourStory

Machine learning has made significant strides in recent years, demonstrating remarkable capabilities in various domains such as image recognition, natural language processing, and recommendation systems. However, a fundamental limitation of traditional machine learning approaches is their reliance on labeled training data. This requirement poses a challenge when confronted with new, unseen classes or categories. Zero-Shot Learning (ZSL) emerges as a powerful technique that addresses this limitation, enabling machines to learn and generalise from previously unseen data with astonishing accuracy.

Zero-Shot Learning is an approach within machine learning that enables models to recognise and classify new instances without explicit training on those specific instances. In other words, it empowers machines to understand and identify objects or concepts they have never encountered before. Traditional machine learning models heavily rely on labeled training data, where each class or category is explicitly defined and represented. However, in real-world scenarios, it is impractical and time-consuming to label every possible class.

ZSL leverages the power of semantic relationships and attribute-based representations to bridge the gap between seen and unseen classes. Instead of relying solely on labeled training examples, ZSL incorporates additional information such as textual descriptions, attributes, or class hierarchies to learn a more generalised representation of the data. This allows the model to make accurate predictions even for novel or previously unseen classes.
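To make the attribute idea concrete, here is a minimal, hypothetical sketch; the class names, attributes, and scores are invented for illustration and are not from any particular ZSL system. An unseen class such as "zebra" is described only by its attribute signature, and an input is classified by matching its predicted attributes against those signatures.

```python
# Minimal sketch of attribute-based zero-shot classification (illustrative only).
# The attribute names, values, and classes below are assumptions made up for the example.
import numpy as np

# Class "signatures": each class is described by attributes, not by labeled images.
# Attribute order: [has_stripes, has_four_legs, lives_in_water]
class_attributes = {
    "zebra":   np.array([1.0, 1.0, 0.0]),  # unseen class: described only by attributes
    "horse":   np.array([0.0, 1.0, 0.0]),
    "dolphin": np.array([0.0, 0.0, 1.0]),
}

def classify(predicted_attributes: np.ndarray) -> str:
    """Return the class whose attribute signature best matches the predicted attributes."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return max(class_attributes, key=lambda c: cosine(predicted_attributes, class_attributes[c]))

# An attribute predictor trained only on seen classes might output scores like these
# for an image of an animal it was never trained to label directly:
print(classify(np.array([0.9, 0.8, 0.1])))  # -> "zebra"
```

In a real system the attribute predictor is trained on seen classes only; the unseen class is recognised purely through its attribute description.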

Zero-Shot Learning operates on the premise of transferring knowledge learned from seen classes to unseen ones. The process typically involves the following steps, illustrated by the code sketch after the list:

Dataset Preparation: A dataset is created, containing labeled examples of seen classes and auxiliary information describing the unseen classes. This auxiliary information could be textual descriptions, attribute vectors, or semantic embeddings.

Feature Extraction: The model extracts meaningful features from the labeled data, learning to associate visual or textual representations with class labels. This step is crucial in building a robust and discriminative representation of the data.

Semantic Embedding: The auxiliary information for unseen classes is mapped into a common semantic space. This step enables the model to compare and relate the features of seen and unseen classes, even without explicit training examples.

Knowledge Transfer: The model leverages the learned features and semantic relationships to make predictions on unseen classes. By understanding the shared attributes or semantic characteristics, the model can generalise its knowledge to recognise and classify previously unseen instances accurately.
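The sketch below strings these steps together in one common form of ZSL: a linear mapping is learned from visual features to the semantic embedding space using seen classes only, and an unseen input is assigned to the nearest unseen-class embedding. All data, dimensions, and the choice of ridge regression as the projection are assumptions for illustration, not a prescribed recipe.

```python
# Minimal end-to-end ZSL sketch: project visual features into a semantic space
# learned from seen classes, then match against unseen-class embeddings.
# Toy random data stands in for real features and class embeddings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Dataset preparation (assumed): seen-class images with labels, plus semantic
# embeddings (e.g., word vectors or attribute vectors) for seen and unseen classes.
seen_class_embeddings = rng.normal(size=(5, 50))     # 5 seen classes, 50-dim semantic space
unseen_class_embeddings = rng.normal(size=(3, 50))   # 3 unseen classes, no training images
X_seen = rng.normal(size=(200, 512))                 # extracted visual features (e.g., CNN outputs)
y_seen = rng.integers(0, 5, size=200)                # seen-class labels

# Semantic embedding / knowledge transfer: learn a projection from feature space
# to the semantic space using only the seen classes.
projector = Ridge(alpha=1.0)
projector.fit(X_seen, seen_class_embeddings[y_seen])

def predict_unseen(x: np.ndarray) -> int:
    """Project a feature vector and return the index of the closest unseen-class embedding."""
    z = projector.predict(x.reshape(1, -1)).ravel()
    scores = unseen_class_embeddings @ z  # dot-product similarity; cosine would also work
    return int(np.argmax(scores))

print(predict_unseen(rng.normal(size=512)))  # index of the best-matching unseen class
```

The key point is that the projector never sees an example of the unseen classes; it only needs their semantic embeddings at prediction time.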

Zero-Shot Learning offers several advantages and opens up new possibilities in the field of machine learning:

Scalability: ZSL eliminates the need for retraining models every time a new class is introduced. This makes the learning process more efficient and scalable, as the model can seamlessly adapt to novel categories without requiring additional labeled examples.

Flexibility: ZSL allows for the incorporation of diverse sources of information, such as textual descriptions or attribute vectors, enabling models to generalise across different modalities. This flexibility broadens the applicability of machine learning in domains where explicit training data may be scarce or costly to obtain.

Real-World Relevance: In many real-world scenarios, new classes continuously emerge or evolve. Zero-Shot Learning equips models with the ability to adapt and recognise novel instances, making them more applicable in dynamic environments where traditional models would struggle.

Transfer Learning: ZSL leverages the knowledge gained from seen classes to make predictions on unseen classes. This ability to transfer knowledge opens up possibilities for transferring models trained on one domain to another related domain, even if the new domain lacks labeled examples.

The applications of Zero-Shot Learning are far-reaching and have the potential to transform various industries. Some notable applications include:

Object recognition and image classification in domains where new classes emerge frequently, such as wildlife conservation or the fashion industry.

Natural language processing tasks like text categorisation or sentiment analysis, where new topics or categories continuously emerge.

Recommendation systems, where ZSL can enable personalised recommendations for previously unseen items or niche categories.

While Zero-Shot Learning has shown remarkable promise, there are still challenges that researchers and practitioners aim to address. Some of the key areas of focus include:

Semantic Gap: Bridging the semantic gap between seen and unseen classes remains a challenge. Developing more accurate and robust methods for mapping semantic information to feature representations is essential for improving ZSL performance.

Fine-Grained Learning: Zero-Shot Learning is particularly challenging in fine-grained domains where subtle differences exist between similar classes. Developing techniques that can capture and discriminate these fine-grained details is an ongoing research area.

Data Bias: Ensuring the fairness and generalisation of Zero-Shot Learning models is crucial. Models must be designed to handle data biases and prevent biased predictions when dealing with unseen classes.

As research continues in these areas, Zero-Shot Learning will likely continue to evolve, pushing the boundaries of machine learning and enabling machines to learn and generalise from previously unseen data in even more sophisticated ways.

Zero-Shot Learning represents a significant advancement in the field of machine learning by overcoming the limitations of traditional approaches. By leveraging auxiliary information and semantic relationships, ZSL enables machines to recognise and classify novel classes accurately, without the need for explicit training examples. With its scalability, flexibility, and real-world relevance, Zero-Shot Learning opens up new opportunities for applications in various domains. As research progresses and the challenges are addressed, ZSL is set to revolutionise the way machines learn and adapt, paving the way for more intelligent and capable systems.

See more here:
Zero-Shot Learning Demystified: Unveiling the Future of AI in ... - YourStory


Cryptocurrency exchange Binance leaves the Netherlands after … – NL Times

Binance, one of the world's largest cryptocurrency exchange platforms, will no longer be available for trading to owners of digital currencies in the Netherlands starting next month. De Nederlandsche Bank (DNB) did not grant Binance a license to operate in the country. New users from the Netherlands can no longer register on the platform, and after July 17, existing users will only be able to withdraw assets from their accounts.

Last year, Binance was already fined over 3.3 million euros by DNB for operating without the legally required registration with the central bank. DNB pointed out that Binance had a very large number of customers in the Netherlands.

The million-euro fine imposed on Binance covered the period from May 2020, when the registration requirement was introduced, until at least December 2021. Citing legal considerations, DNB refrained from disclosing whether another fine was pending or the reasons behind Binance's non-compliance. "In general, you can impose a fine again in such a case," a spokesperson said.

Registration, which dozens of other crypto providers in the Netherlands have obtained, is crucial for combating money laundering and terrorist financing.

The exchange stated that existing Dutch users will be notified via email with detailed information regarding the impact on their accounts and current assets. Binance advised users to withdraw all their assets from their accounts. While expressing disappointment over the situation, the company said it will maintain a productive and transparent relationship with Dutch regulators.

Binance remarked that it has acquired licenses in other European Union countries, such as France and Spain. However, the platform has been banned in the United States since 2019, leading to the establishment of Binance.US as a subsidiary to ensure compliance with regulations. Despite this, Binance.US has also faced bans in six states. Earlier this month, the company and its founder Changpeng Zhao came under scrutiny from the Securities and Exchange Commission (SEC). The American financial regulator questioned the true independence of Binance.US from its parent company.

Original post:
Cryptocurrency exchange Binance leaves the Netherlands after ... - NL Times
