Why Fashion Needs More Imagination When It Comes To Using Artificial Intelligence – Forbes

Virtual Fashion Show created using 3D digital design and AI machine learning algorithms

Until now, the use of artificial intelligence (AI) in the fashion industry has focused mostly on streamlining processes and increasing sales conversion. The areas that have traditionally taken precedence are finding efficiencies through automation, detecting product defects and counterfeit goods with image recognition, and increasing sales conversion through personalised styling. Creative uses of AI have been underexplored, yet they represent a mammoth opportunity for an industry rapidly digitising its design and presentation methods during the pandemic, and most likely afterwards too. Why is creative AI so underutilised in fashion, and what are the nascent opportunities for designers and brands? Is the use of AI in fashion design and presentations inevitable?

Matthew Drinkwater, Head of the Fashion Innovation Agency at London College of Fashion, believes that "initial uses of artificial intelligence have focused on quantifiable business needs, which has allowed for start-ups to offer a service to brands." He contends that "creativity is much more difficult to quantify and therefore more likely to follow behind."

In a practical sense, perhaps an additional limitation has been the gulf between the skill sets of fashion designers and computer scientists. London College of Fashion seems to think so, having recently launched an eight-week AI course in which 20 volunteer fashion students learned Python, wrote code to gather fashion data, then used it to develop creative fashion solutions and experiences. When asked about the potential of AI in fashion, Drinkwater said: "For me, it is in the unpredictability of an algorithm." He acknowledged the creative talent of designers but suggested that the collaboration between creatives and neural networks may be where the unexpected is delivered. It is here, he predicts, that an imperfect result could arise, one that challenges our perception of what fashion design or showcasing could or should be.

The AI course was developed by the Fashion Innovation Agency (FIA) in partnership with Dr Pinar Yanardag of MIT Media Lab. Working on the course was the FIA's 3D designer, Costas Kazantzis, who also designed 3D environments for one of the course outputs: an AI-driven catwalk. He explained during a Zoom call that the students hadn't coded before and were from a wide range of courses, including pattern cutting (for garment construction) and fashion curation. Despite being complete beginners learning Python, "when they understood the technical capabilities of AI they were able to thrive," he said.

The AI models used were generative adversarial networks (GANs), a type of machine learning in which two adversarial models are trained simultaneously: a generator ("the designer"), which learns to create images that look real, and a discriminator ("the design critic"), which learns to tell real images apart from fakes. During training, the generator becomes better at creating images that look real, while the discriminator becomes better at detecting fakes. Applied creatively, this allows computer-generated imagery and movement that look plausible (and are likely aesthetically pleasing) to the viewer.
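For readers curious what that two-player game looks like in code, the sketch below shows the generator-versus-discriminator training loop in miniature. It is illustrative only, assuming PyTorch and the public Fashion-MNIST dataset of clothing images; the models and data used on the course itself are not detailed here.

```python
# Minimal GAN sketch: a "designer" (generator) and a "design critic"
# (discriminator) trained against each other. Assumes PyTorch and the
# public Fashion-MNIST dataset as a stand-in for fashion imagery.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

LATENT = 64  # size of the random "idea" vector fed to the generator

generator = nn.Sequential(          # noise -> fake 28x28 image
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(      # image -> real/fake score (logit)
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

data = DataLoader(
    datasets.FashionMNIST(
        "data", download=True,
        transform=transforms.Compose([transforms.ToTensor(),
                                      transforms.Normalize((0.5,), (0.5,))]),
    ),
    batch_size=128, shuffle=True,
)

for epoch in range(5):
    for real, _ in data:
        real = real.view(real.size(0), -1)
        fake = generator(torch.randn(real.size(0), LATENT))

        # Critic step: score real images as 1 and generated images as 0.
        d_loss = loss(discriminator(real), torch.ones(real.size(0), 1)) + \
                 loss(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Designer step: produce images the critic scores as real.
        g_loss = loss(discriminator(fake), torch.ones(real.size(0), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    print(f"epoch {epoch}: D loss {d_loss.item():.3f}, G loss {g_loss.item():.3f}")
```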

The students formed teams and devised proof-of-concept showcases of AI's uses within the fashion industry, having also been shown how and where to gather appropriate data to train their own algorithms. The course covered a range of AI applications, including training a model to classify items of clothing and predict fashion trends from social media, and style transfer to recognise imagery and create new designs. A pivotal output from the course was a virtual fashion show, created from archive catwalk show footage but placed in a new 3D environment with the models wearing new 3D-generated outfits. Drinkwater believes this is an example of how even those with limited experience in the field can collaborate to push boundaries.
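As an illustration of the clothing-classification exercise mentioned above, here is a hedged sketch of a small convolutional classifier, again assuming PyTorch and Fashion-MNIST as a stand-in for the students' own gathered data and model choices.

```python
# Sketch of a small CNN that classifies garment images into categories.
# Illustrative only; the students' actual datasets (e.g. scraped
# social-media imagery) and architectures are not specified in the article.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

classes = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
           "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

model = nn.Sequential(                      # 28x28 image -> 10 class scores
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, len(classes)),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

train = DataLoader(
    datasets.FashionMNIST("data", train=True, download=True,
                          transform=transforms.ToTensor()),
    batch_size=128, shuffle=True,
)

model.train()
for images, labels in train:                # one pass is enough for a demo
    opt.zero_grad()
    loss_fn(model(images), labels).backward()
    opt.step()

# Predict the garment category of a single test image.
model.eval()
image, _ = datasets.FashionMNIST("data", train=False,
                                 transform=transforms.ToTensor())[0]
print(classes[model(image.unsqueeze(0)).argmax().item()])
```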

Talking me through the workflow for the virtual show, Kazantzis explained that computer vision algorithms were used to estimate skeletal movement data from an archive fashion show video. This data was then turned into a 3D pose simulation using another algorithm and applied to a 3D avatar in Blender to replicate the models' movements in the original video.
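The article does not name the specific computer vision tools the team used, but the sketch below illustrates the general idea with the open-source MediaPipe Pose library: per-frame skeletal landmarks are estimated from a video, producing motion data that could then be retargeted onto a rigged avatar in Blender. The video filename is hypothetical.

```python
# Sketch of skeletal motion extraction from archive catwalk footage,
# assuming OpenCV and MediaPipe Pose (not necessarily the tools used by FIA).
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
video = cv2.VideoCapture("archive_runway_show.mp4")  # hypothetical filename

frames = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_world_landmarks:
        # 33 hip-centred 3D landmarks per frame: a skeleton sequence
        # that could drive a rigged avatar in Blender.
        frames.append([(lm.x, lm.y, lm.z)
                       for lm in result.pose_world_landmarks.landmark])

video.release()
pose.close()
print(f"extracted {len(frames)} skeleton frames")
```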

CLO software was used to design and animate the garments for the avatar models, and style transfer (which uses image recognition via convolutional neural networks, or CNNs, to recognise patterns, textures and colours, then suggests designs and their placement on the garment) was used to develop the textiles and final garment surfaces. The 3D environment for the virtual show was created in the gaming engine Unity, which Kazantzis favours for its flexibility in design and its diverse outputs, including VR and AR applications. He used particle systems to create atmospheric effects such as fog, as well as sea life, including jellyfish, in the underwater environment. The show was brought together in Unity (once the animated garments and textures were imported), creating a final experience ready for export as a VR scene, as a website that can be navigated in 360 degrees, or as an AR experience in Sketchfab, for example. It's here that the power of AI to develop creative products, environment design and immersive content simultaneously seems most potent.
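The exact style-transfer model used on the course is not specified; the sketch below illustrates the technique with a publicly available pre-trained model from TensorFlow Hub, applying the look of a reference print to a garment texture. Both image filenames are hypothetical stand-ins.

```python
# Sketch of neural style transfer for garment surfaces, assuming the
# pre-trained arbitrary-stylisation model published on TensorFlow Hub.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load(path, size=512):
    """Load an image as a float32 batch in the [0, 1] range."""
    img = np.array(Image.open(path).convert("RGB").resize((size, size)))
    return tf.constant(img[np.newaxis, ...] / 255.0, dtype=tf.float32)

model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load("garment_flat.png")      # hypothetical: a plain garment texture
style   = load("print_reference.png")   # hypothetical: a reference print or painting

# The model returns a batch of stylised images at the content resolution.
stylised = model(content, style)[0]
Image.fromarray(np.uint8(stylised[0].numpy() * 255)).save("garment_texture.png")
```

The resulting texture could then be mapped onto the CLO garment before import into Unity, which is broadly the hand-off the workflow above describes.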

Kazantzis worked alongside Greta Gandossi, a 2019 graduate of the MA Pattern and Garment Technology course at London College of Fashion (who also holds an architecture degree), and Tracy Bergstrom (who has a data science background). The trio formed a pipeline for extracting the movement from the archive footage, creating the 3D garments and importing them into Unity. The students who created this virtual fashion show alongside them were Mary Thrift, Tirosh Yellin and Ashwini Deshpande.

The AI course commenced in March and the proof-of-concept virtual show was completed in June. This seems incredibly swift, and prompted me to ask Matthew Drinkwater whether this type of content creation is affordable and feasible for small and large brands alike. "Absolutely," he said, explaining that the project was created with a nominal budget. A caveat? "The more GPUs you throw into the mix, the more impressive your results are likely to be." Additionally, he recognised that the skill sets required are varied and that these factors would affect the timeframe. Despite this, he said: "I would fully expect to see many more examples of AI appearing on the catwalk in seasons to come."

This proof-of-concept virtual show launches today, on the fifth day of London Fashion Week, which is operating in a decentralised manner across digital and physical platforms. Most brands are choosing to livestream a catwalk show happening behind closed doors, or to release a conceptual or catwalk-style video online at a specified showtime. Data from Launchmetrics indicates that the engagement generated by these digital show formats has been much lower than for physical fashion shows. Could AI-generated virtual fashion experiences shape the future of fashion shows? Echoing others in the industry, Drinkwater said: "It has long been evident that fashion weeks have needed to evolve to provide a much more varied and accessible experience." He went on to add: "One fact is undeniable: the increased blurring of our physical and digital lives is going to lead to fashion shows that are markedly different from the traditional runway of the past."

Landmark uses of creative AI include the computer-generated artwork that sold at Christie's in 2018 for $432,500 (almost 45 times its estimate). The artwork, Portrait of Edmond Belamy, was created by self-taught AI artist Robbie Barrat using a GAN model, working in partnership with the Paris-based arts collective Obvious. Barrat has also worked on an AI-generated Balenciaga runway show and trained a neural network on the past collections of fashion brand Acne Studios to generate designs for their AW20 men's collection. On the consumer and marketing side, there has been an expansion of deepfakes that place consumers inside the content of the brands they covet. The RefaceAI app face-swaps the user into branded videos, and recently generated more than one million refaces and 400,000 shares in a day during a test collaboration with Gucci.

Mathilde Rougier's generative upcycled textile 'tiles'

On the experimental side, and seeking to address sustainability through the upcycling of waste, fashion design graduate Mathilde Rougier is using convolutional neural networks (CNNs) to design new textiles composed of interlocking offcut fabrics (akin to Lego), creating perpetually new fashion products from old ones. Her process is explained in detail in a recent Techstyler article and marks a new level of convergence between fashion design, AI and sustainability problem-solving.

Creative AI in fashion is in its infancy but is clearly gaining momentum. With the rapid adoption of 3D digital design in both fashion education and the industry and the ongoing restrictions in physical showcasing, the widespread creative use of AI appears to depend only on a critical mass of use cases to inspire industry adoption. If a group of students with no coding experience can develop this virtual show in just a few months on a nominal budget, the future of the fashion show looks refreshingly unpredictable.
