

Bengaluru's brewmaster-turned-data scientist uses AI to decode beer palate shift

Imagine Getafix running mathematical analyses of the Gaul village’s taste patterns, strengths and weaknesses while he mixes the magic brew. Getafix’s creators, René Goscinny and Albert Uderzo, might have dismissed the thought. But wouldn’t a wise brewer use information to create the perfect potion? The question has long intrigued Ankur Napa through his career as a brewmaster and data analytics student.

Like many, the Bengaluru resident discovered his love for a cold one during his early years in college. “I was pursuing my BTech in biotechnology, and I remember when people would ask me about career plans, I would joke, ‘I am going to be a beer engineer’,” he laughs. As fate would have it, he ended up with an internship with Kingfisher. “It was supposed to be a casual thing, but I fell in love with beer,” the 38-year-old shares. Over the next two decades, he worked with some of the leading brands in the beverage industry, from SABMiller to Budweiser, while also pursuing a Masters in wine brewing and alcohol engineering from the University of Pune.

The numbers bubble

The art of brewing is closely linked with data analytics, Napa informs us. The industry, however, largely confines analytics to marketing, sales and the supply chain. It was his interaction with marketing and sales managers that introduced the brewmaster to algorithms and data. “Initially, I bristled at the idea of marketing men telling me how to brew beer. But then, I sat down and spoke with them. They would share details of an XYZ brand of ale that is doing well in the period from July to October, for instance. Therefore, we need to put out an ale with a similar flavour profile to compete,” he shares. This insight into consumer tastes set Napa on the path to creating iWort.

Panels highlighting algorithm parameters used to measure the brewery process. Pics Courtesy/iWort

He is not the first to tap into the combination of mathematics and brewing.
“My inspiration dates back to the research of William Sealy Gosset. He was a graduate of chemistry and mathematics at Oxford University, and head brewer of a humble Irish brewery, Guinness,” he laughs. It was Gosset who developed the formulae that allowed Guinness to predict the nature of a brew from the average resin or pH content of a pile of hops, and to use atmospheric pressure to estimate sales of ales and stouts. Napa’s technology is a 21st-century version of the same thought.

Starting with its first iteration in 2018, iWort is a cumulative algorithm that now uses AI programming to correlate collated consumer reviews, market information, sales data and beer recipes to suggest optimised solutions. Working out of a residence behind Manyata Tech Park in Nagavara, the founder explains, “If a survey of consumers at a brewery brings up the common review of the beer being too sweet, my algorithm will automatically suggest a reduction in the residual sugar in the brewing process in the next batch.” With these quick solutions, the brewer does not have to modify the entire recipe and start all over again.

Vintage treasures

The other aspect is the historical data analysis that AI is capable of. Napa suggests that running the algorithm with expanded parameters, such as upcoming high or low sale days, seasonal changes and historical consumption patterns, can help focus the production strategy further.

Ankur Napa

The difference is that brewers often do not have access to this data. “Incidentally, all brewers are familiar with analytics and parameters. From pH levels to temperatures and fermentation time, they crunch numbers. However, there is a conservatism towards using marketing information in brewing; a certain amount of pride of the brewer as an artist,” he explains. Large brands are already working on AI-assisted algorithms to achieve results. “Global brands tapped into the phenomenon quite early.
But there is still a hesitance among independent craft breweries. Secondly, there are limited viable options that a craft brewery can adapt to, based on their working model,” he adds. While Napa is currently working with small-scale independent breweries in the city, he informs us that the virtual platform has the potential to adapt to the requirements of a larger brewery.

Things are on the turn, though. With the rise of independent craft breweries across Bengaluru, the founder believes change is inevitable. “Many in the F&B industry are turning towards AI as an alternative tool to crunch numbers and derive predictions. From market trends to changing consumer responses, every data point is being tapped. However, not many will reveal it since there is a taboo about chefs or brewers turning to technology as a tool. They feel it reduces the human or artistic touch. But to me, mathematics is an art, the same as brewing,” he concludes.

What is it? iWort generates brewing solutions for independent craft breweries based on consumer responses, historical data, recipes, sales and marketing inputs.

How it works? The data input by the brewers is tracked on a daily basis, allowing the artificial intelligence (AI) and Napa’s algorithm to calculate upcoming changes and pre-empt losses or the need to adapt accordingly.

Who it affects/benefits? From effective marketing campaigns and altered recipes to curating new brews and beers for a selective market, the programme can help brewers be proactive in their approach.
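The review-to-recipe feedback loop that Napa describes can be sketched in a few lines of Python. This is only an illustration: iWort’s actual model and data are not public, so the keyword-to-adjustment mapping below is invented for the example.

```python
# A minimal sketch of review-driven brew adjustment, assuming a simple
# keyword-to-parameter mapping. The ADJUSTMENTS table is hypothetical.
from collections import Counter

ADJUSTMENTS = {
    "too sweet": "reduce residual sugar in the next batch",
    "too bitter": "lower the hop addition or shorten the boil",
    "too flat": "raise carbonation during conditioning",
}

def suggest_adjustment(reviews):
    """Return the tweak matching the most frequent complaint, if any."""
    counts = Counter()
    for review in reviews:
        for complaint in ADJUSTMENTS:
            if complaint in review.lower():
                counts[complaint] += 1
    if not counts:
        return None
    top_complaint, _ = counts.most_common(1)[0]
    return ADJUSTMENTS[top_complaint]

reviews = ["Too sweet for my taste", "Loved it, but too sweet", "Too bitter"]
print(suggest_adjustment(reviews))  # reduce residual sugar in the next batch
```

In a real system the matching would be done by a trained model rather than substring checks, but the shape of the loop, from aggregated complaints to a single targeted process change, is the same.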

26 July,2024 12:41 PM IST | Mumbai | Shriram Iyengar
A view of the climate microsensor at the KRVIA terrace

Architects-turned-coders develop tool to predict Mumbai's micro-climate changes

Walk into the Kamla Raheja Vidyanidhi Institute of Architecture (KRVIA), and the cool vibe is a stark contrast to the corporate ambience of its corridors. The college is a hub of buzzing students milling about new ideas, designs and creations. This is not limited to the students, though. Down the corridor from the classrooms, Assistant Professor Ankush Chandran is busy at work on a seemingly normal computer. The data he is reading, though, tells another story.

The geomapped space in Aarey forest

For the last two years, 31-year-old Chandran and fellow professor and collaborator, 55-year-old Aneeruddha Paul, have been working on a system that monitors the urban microclimate. “We started the self-funded project in 2022, and have now set up the equipment and sensors. Over the last year or so, we have been gathering steady data for computation and analyses,” Chandran reveals.

Tool building

The project took off from another idea Chandran was part of in 2021, a possible collaboration with organisations working in the urban design realm: the geo-mapping of Aarey Forest. While that project is still in the planning stage, the methodology of using artificial intelligence (AI) to analyse the data set the team on a different path.

Ankush Chandran working the data at the computer lab. Pics/Anurag Ahire

The result is a set-up that tracks the biggest issue facing the city currently: climate change. “We have sensors set up on a circuit box with an antenna on the terrace to monitor different parameters,” Chandran explains. The two-metre-long antenna tracks solar radiation, humidity, temperature, wind speed, wind direction, rainfall and atmospheric pressure. The team, he adds, is also working to include sound sensors in the next iteration. “Data can vary drastically between neighbourhoods. The amount of greenery around the buildings, their location and even the structures can have an impact,” he explains.
This data is relayed to the server through a GSM module fitted with a SIM card, embedded in the sensor. “It is basic mobile phone technology that enables the server to collate the data. The relay is live,” he says.

Simplifying data

With a large amount of data and so many parameters, it was natural that the duo turned to AI for assistance. “The struggle was to correlate such vast data. For instance, if we had to sit down and correlate the temperature statistics with humidity or solar radiation at different points of time, it would take a large amount of manpower, time and effort,” he admits. There were challenges, though.

A close-up of the sensors at KRVIA

The interface was built using ChatGPT-4 and Python programming. “As architects, we are not proficient in coding. It was a struggle to input the data and the commands in the initial stages. But once the paths were set, the results came in quick,” the architect remarks.

At a time when citizens are confounded by changing climate behaviour, and sociologists are anxious about its effects on society, this information can offer crucial details. “The idea is to open this data up to the public. We are currently in early talks with corporators across the city to take the pilot project further,” he shares. From analysing climate changes to unidentified neighbourhood issues, or the effect of changing weather on standing structures and infrastructural systems at the neighbourhood and city level, the potential is astounding. For instance, he remarks, the rainfall in the city this season has been erratic. “It is the one thing we track as a whimsy. There have been moments when the rainfall is extreme, and the very next day, it has dropped off the radar. That is not usual for a July monsoon,” he observes.
“It can be helpful in designing urban landscapes, and future plans for a neighbourhood using materials and equipment that take into account the micro-climate of the space,” Chandran states. As we conclude, students are milling about their classes. “They are already using AI with Midjourney and CityEngine, among others, in designs. It is reshaping architecture studies,” he admits. Does that worry him? “I don’t think so. We did have this conversation a few years ago. Though we warn students against overdependence, it is inevitable. Why would you, as an educator or innovator, shy away from new technology? It is inevitable, and will soon be the mode,” Chandran suggests. It is hard to argue that point.

What is it? Micro-climate sensors enable researchers to track data on changing climate parameters like humidity, rainfall, temperature, solar radiation and atmospheric pressure to help monitor the changing urban climate.

How it works? The sensors in the antenna collect data and transmit it through GSM chips to a remote server. The data is compiled, and the correlation between its parameters is measured by a ChatGPT-4-assisted programme module to produce visual graphs and projections.

Who it affects/benefits? The technology can be used in a wide scope of fields, from climate studies to urban research and architectural design.
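The correlation step Chandran describes, relating temperature to humidity at different points of time, boils down to a standard statistical measure. A minimal sketch in plain Python, with made-up readings standing in for the KRVIA sensor data:

```python
# Pearson correlation between two sensor series, computed from scratch.
# The hourly readings below are invented for illustration, not KRVIA data.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative hourly readings: temperature (deg C) vs relative humidity (%)
temperature = [31.0, 32.5, 33.1, 30.2, 29.4, 28.8]
humidity    = [68,   61,   58,   74,   79,   83]

r = pearson(temperature, humidity)
print(round(r, 2))  # strongly negative: warmer hours read drier here
```

Doing this by hand for every pair among seven-plus parameters, across months of live data, is exactly the manpower problem the duo handed off to their AI-assisted pipeline.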

26 July,2024 12:39 PM IST | Mumbai | Shriram Iyengar
Mihir Gilbile with his telescopic camera during an astrophotography expedition in Ladakh

As stargazing fad grows, astronomers leverage AI for enhanced galactic imagery

For 27-year-old Mihir Gilbile, gazing at stars is both a professional and personal joy. As the tour faculty at the Horizon Astronomical Society in Vangani, in Thane district, Gilbile often takes participants through the mysteries and techniques that go into stargazing and astrophotography. “It is not as simple as taking photographs of a landscape. There is a lot of mathematics and science that goes into it,” he explains.

Since it was first experimented with in the early 19th century, astronomers have used high-powered telescopes and lenses for space exploration. Naturally, stargazers have been experimenting with technology from its early years. Today, astronomers around the world are turning to Artificial Intelligence (AI) as an innovative tool to get sharper images of constellations.

For a clear view

An image of the North American Nebula (before processing)

The exercise is a delicate one, with many variables, such as the lens, the exposure and the nature of the targeted star or galaxy, that come into play. “We use computerised telescopes that can be handled remotely. These are trained at a constellation or galaxy, and left unobstructed so that they can gather data to create the right image,” he shares. The exposure can range anywhere from 10 minutes to five hours. In this duration, the equipment can face multiple hurdles, such as heating up, atmospheric interference and data lags, that affect the picture.

A key part of the exercise, then, is the processing. “You need to understand that distance is often irrelevant. Since galaxies and nebulae emit light on different wavelengths, you need filters to truly see them as humans do,” he explains. The telescopic photographs are often in black and white. These are then subjected to multiple filters, such as hydrogen, sulphur and RGB (Red, Green and Blue) channel filters, to clear the image for the human eye.
Future technology

Since 2018, scientists and astronomers have been experimenting with software to simplify this process, he shares. Software like ASIAIR, Luminar AI or NoiseXTerminator serve to spruce up pictures taken in a short time. “Traditionally, astrophotographers would stack the images and process them on Photoshop. The quality would depend on the individual’s processing, their technique, experience and understanding. Currently, AI is used to remove the noise. It reads the image better, and differentiates much better between a normal pixel and plain noise,” Gilbile explains. It also eliminates the manual hours spent poring over pixels painstakingly to get an accurate image. Most software, he adds, also comes with features for pre-processing. This allows users to set parameters and limits to focus specifically on an image, eliminating further distractions, he shares.

An image of the North American Nebula (after processing)

“I turned to AI software in the last year, and the results have been phenomenal. However, it has been steadily on the rise. While scientists did not always know it was AI, as the versions have evolved, it has become a standard use case across the scientific community,” Gilbile remarks. Does that change the human effort of discovery, we wonder. “No,” says Gilbile, “Science is about accuracy. As long as technology allows us to get accurate images and clear data, and enables us to read and understand it better, it can prove only beneficial.”

A clear image of the Lagoon and Trifid nebulae processed by AI software

What is it? AI-enabled software helps in cleaning up data and images captured through telescopic lenses for clearer pictures of constellations and galaxies.

How it works? The software processes the data and the images faster, based on previous analysis. It helps to eliminate noise, pixelation, or any interference created during the camera exposure, with minimal errors.
Clear reading of data and set parameters enable a sharper image post-processing.

Who it benefits? For now, the technology is benefiting astronomers and stargazers who are turning to it as a tool to simplify their processes. As the medium evolves, hobbyists speculate how it might change the nature of stargazing with specific and targeted parameters and accurate focus.
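The stacking that Gilbile says astrophotographers traditionally relied on has a simple statistical basis: averaging many exposures of the same scene suppresses random noise while the signal stays put. A toy single-pixel demonstration, with invented numbers (AI denoisers like the ones named above go further by learning what noise looks like, but stacking is the classical baseline):

```python
# Toy model of frame stacking: one sky pixel with Gaussian shot noise.
# TRUE_PIXEL and the noise sigma are invented for illustration.
import random

random.seed(42)
TRUE_PIXEL = 100.0  # the "real" brightness of one sky pixel

def noisy_frame():
    return TRUE_PIXEL + random.gauss(0, 10)  # one exposure, sigma = 10

single = noisy_frame()
stacked = sum(noisy_frame() for _ in range(100)) / 100

# Averaging N frames shrinks the noise roughly by a factor of sqrt(N),
# so the stacked estimate sits far closer to the true value on average.
print(abs(single - TRUE_PIXEL), abs(stacked - TRUE_PIXEL))
```

With 100 frames, the residual noise is about a tenth of a single exposure’s, which is why long imaging sessions, and now AI pre-processing of the same data, pay off.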

26 July,2024 12:07 PM IST | Mumbai | Shriram Iyengar
Clyde D’Souza (left) explains the workings of the virtual experience with the writer

Would you digi-dance with me?

Stag entry or couple? The simple question that has irked party-goers in the city for decades might soon be redundant. Picture this: you walk into a club. The place is dead. It’s an evening on a working day; you should have known. But instead of calling it a day, you slip on a VR headset. The club transforms into a city under the sea, and in the blink of an eye, your friends, in the form of ‘digital surrogates’, are all here. To test this concept that city-based visual DJ Clyde D’Souza calls the future of clubbing, we head down to a Bandra haunt on a weekend.

The venue is scanned using the headset to identify surfaces and objects

The set-up behind this complex-sounding idea is a simple VR headset and two handheld controllers that act as motion sensors for your arms. The magic lies in the process. With the headset on, D’Souza first walks around the space. This allows an AI-assisted model to identify the walls, corners and roof of the compact pub. With the space mapped out, it’s time for a makeover.

Clyde D’Souza

A whole new world

When we finally wear the lightweight headset that straps on like a pair of swimming goggles, we see the venue as it is, almost as if the headset were see-through. It’s only when we turn towards the windows that we’re treated to a view of vibrant fish and coral life thriving outside. At the other end of the venue, the entrance of the club has been transformed into pearly gates. Given the increasing monotony of spaces in the city, we’re already sold on the idea.

A mystical gate placed at the entrance of the club welcomes the user wearing the headset to the virtual experience

But for D’Souza, the key lies in dirrogates, the digital humans that inhabit this new world. He explains that these characters can work two ways. “Imagine I had a friend who wanted to join us from home. By donning a headset and logging into the venue, he could send a dirrogate that we would see on our headsets here.
In turn, he’d see us in real time as well.” How the character moves and interacts is dictated by motion sensors on the headset and the two handheld controllers.

An empty wall at the venue turns into an underwater window inside the experience

But how do three sensors let you control a character in its entirety? D’Souza reveals that an AI model uses predictive technology to draw assumptions about the parts of your body that are not tracked. “For instance, if you stick your hands out, the model reads it and refers to human skeletal capabilities. This way, even though your shoulders weren’t tracked actively, they move in a very human way.”

Looking ahead

It’s the second use case, however, that catches our attention. For those days when you have last-minute apprehensions about heading out to party, the dirrogates come in even handier. You can simply log in to the venue from home and leave your set-up alone; a work-from-home option for party-goers, if you will. “Based on your previous activities and moves, the dirrogate constantly learns your ‘style’. When left alone without inputs, it mimics these mannerisms,” he explains.

The virtual characters, aka dirrogates, appear in the space next to individuals at the venue. Pics/Shadab Khan

While the in-experience visuals might not look realistic, and the animated characters felt almost cartoonish at times to us, the creator notes that it’s a matter of time before things get more realistic. “We’re still in the nascent stages. I have been pushing for this technology to be used in clubs across the city to get more inputs from diverse audiences. It’s only through mass adoption that things can get better,” he shares. The experience is built using the game design platforms Unreal and Unity, which allow creators like D’Souza to build 3D environments, and the Andheri resident is now in talks with some familiar tech giants to help push the concept to a larger audience.

What’s holding venues back from embracing the idea?
The expenses involved, we learn. With each set-up costing nearly Rs 40,000, D’Souza believes club owners have little incentive to gamble at this early stage. When we ask the owner of the Bandra haunt we’re in, he agrees. “We recently did a test run of the concept where we had three headsets that were passed around the club on a packed night. This isn’t the kind of innovation that you come across often. The crowd was all for it. If the headsets were to get cheaper, I can see exclusive events taking off, at least at my venue,” he states.

“I’m working on making it affordable by porting the same experience to Indian-made headsets. The only problem is that there aren’t any sturdy ones in the market today. I am in talks with a few Indian brands to see if we can work something out,” he reveals.

Partying with VR headsets on might seem impractical to some, or unnecessary to others. The visual DJ, who has been spinning tunes in the city since the ’90s, believes otherwise. “There was a time when screens in a club were unheard of in Mumbai. Today, you couldn’t find a club that doesn’t run visuals if you tried. It’s a question of when, and not if, when it comes to a concept as novel as this. The party has just begun,” he says.

Log on to @dirrogate

What is it? Dirrogates are AI-powered virtual characters that resemble their owners and can join real-life parties around the world on their behalf.

How it works? A blend of AI and VR technologies, using headsets and sensors, enables dirrogates to be present at venues with their friends.

Who it affects/benefits? For regular club-goers, it can provide a visually enhanced experience; for homebodies, a way to join real-life parties remotely without physically being at the venue.
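The three-point tracking problem D’Souza describes, inferring a whole body from just a headset and two controllers, can be caricatured as a nearest-pose lookup. Real systems (and, per D’Souza, the AI model here) use learned predictive models, so everything below, including the pose table and heights, is invented for illustration:

```python
# A highly simplified sketch of sparse body tracking: given only tracked
# head and hand heights (metres), look up the nearest stored full-body
# pose and read off an untracked joint. The POSES table is hypothetical.
import math

POSES = {
    "standing":  {"head_y": 1.7, "hand_y": 0.9, "shoulder_y": 1.45},
    "hands_up":  {"head_y": 1.7, "hand_y": 2.1, "shoulder_y": 1.45},
    "crouching": {"head_y": 1.1, "hand_y": 0.5, "shoulder_y": 0.85},
}

def infer_pose(head_y, hand_y):
    """Pick the stored pose whose tracked points best match the input."""
    def dist(pose):
        return math.hypot(pose["head_y"] - head_y, pose["hand_y"] - hand_y)
    name = min(POSES, key=lambda k: dist(POSES[k]))
    return name, POSES[name]["shoulder_y"]

print(infer_pose(1.68, 2.0))   # ('hands_up', 1.45)
print(infer_pose(1.15, 0.55))  # ('crouching', 0.85)
```

A production model would predict continuous joint angles constrained by human skeletal capabilities rather than snapping to a handful of poses, which is the behaviour D’Souza’s shoulder example points at.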

26 July,2024 11:51 AM IST | Mumbai | Devashish Kamble
Gajendra Boriawala (with the walkfit stick) tests out the equipment as the teens measure the data. Pics/Ashish Raje

Treating mobility disorders the AI way

The simple act of taking a step involves 200 muscles functioning in a coordinated motion. Students might groan upon hearing yet another complex factoid in their Biology lecture. Rahi Shah and Hriday Boriawala are different; their latest invention is built around this very fact. The two 17-year-old students of RN Podar International School in Santacruz are the inventors of Walkfit, a walking stick that uses pressure sensors to monitor movement, pressure and balance.

The idea, Shah reveals, was born of a familial bond. “Both of our grandparents underwent operations for mobility a couple of years ago. My grandmother had a hip replacement, while Hriday’s grandmother had a knee replacement,” she shares. During the recovery period, the teens observed that the senior citizens often found it difficult to express their issues or experiences.

Gyro sensors measure direction of tilt

“Many older people cannot express what they experience, and sometimes, won’t even speak up about it,” Shah observes. “This makes it difficult for doctors or even loved ones to truly gauge the efficacy of treatment. We do not find out till the problem is unavoidable.” As always, the kids turned to technology for ideas. The duo are also students at the Omotec centre in Vile Parle, where they are familiarising themselves with coding and robotics.

A close-up shot of pressure sensors on the handle

It is at the Vile Parle centre that we catch up with the duo, along with Boriawala’s grandparents, who kindly joined us for a demonstration. Boriawala remarks, “I was already learning coding and machine learning at the centre, and I am also interested in electronic engineering.” The two interests, plus a desire to help their grandparents, gave them the idea of a monitoring device.

Testing the equipment

At first glance, the Walkfit appears to be a simple walking support stick with its four-prong grip. Look closely, and the wires running down the handle come into view.
“The prototype is the circuitry, not the stick,” he points out. “We wanted to create something small that could be easily adapted onto any walking stick. It is feasible and economical,” Shah adds. It took the duo two years to create a working model.

Rahi Shah and Hriday Boriawala working on their invention

As they hand over the stick to Boriawala’s 80-year-old grandfather, Gajendra Boriawala, we are able to observe it in operation. The stick is armed with two pressure sensors on the handle and four on the bottom grip, one on each leg. In addition, it holds a module with gyro sensors towards the bottom. “These help to correlate pressure readings with the angle of tilt to predict the chances of a fall,” Shah explains. The data derived from these sensors is relayed via Bluetooth to their laptop, where it is displayed on a pressure heatmap, along with projected readings. The live relay of the data is analysed by a machine learning module and an algorithm coded in Python to derive projected analysis. “Machine learning is a neural network that derives values based on the parameters you set, and projects possible progressions,” explains Boriawala.

Shah reveals, “Neither of us is a medical expert. So, we showed the videos, data, readings and predicted analysis to a qualified medical professional, Dr Anokhi Shah, MD, and also tested the device on over 50 patients in several old-age homes across Vile Parle. They helped us figure out how to read this information. For instance, if point A is where the pressure reading is, then X is a possible diagnosis.”

A heatmap depicts the pressure points

We watch the colour changes on the pressure map on the laptop with every step. Boriawala elaborates that the Force Sensitive Resistors (FSR) on the handle and foot of the stick measure the pressure applied by the holder and its weight bearing. “For instance, we learned from the doctors that if the patient has a hip problem, they are bound to lean to the front.
Similarly, if the pressure on the front two prongs is beyond a defined value, then there could be an issue with their lower back,” he explains.

Potential benefits

This opens up immense possibilities across the board. “The data will help medical professionals understand the nature of the problem and even facilitate early diagnosis of motor disabilities. They will also be able to monitor day-to-day changes, and be proactive in their treatment plans with accurate information,” Shah points out.

While they have been working on the invention with financial support from their families, they are already catching the eye of experts. The duo showcased it at the IIT Tech Fest and the World Stem and Robotics Olympiad (WSRO) in 2023. Armed with AI and machine learning, these young inventors are already planning big. “We hope to keep working on this, and develop it into an app. Maybe the information could be sent to family or caregivers to alert them in case of emergencies, or help them monitor progress even when they are away from the patients,” the Vile Parle-based Boriawala remarks. Ask them if they are apprehensive of AI stepping into everything, and the quick answer is no. “We understand the apprehension, but technology is a tool and needs to be used for good,” Shah remarks as we wrap up another walk.

What is it? The walking stick is equipped with pressure and gyro sensors to track a patient’s mobility challenges, direction of tilt and pressure points.

How it works? The sensors monitor the pressure and direction during every movement. The gyro sensors track the degree of tilt to predict the possibility of a fall.

Who it affects/benefits? The technology can enable a detailed diagnosis of mobility issues using a heatmap and pressure readings. It can also be developed into an emergency alert mechanism to help senior citizens.
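The pressure-plus-tilt rule Shah describes can be sketched as a simple threshold check. To be clear, the thresholds and the rule below are invented for illustration; the students’ actual model is a trained neural network, not hand-set limits:

```python
# An illustrative fall-risk flag in the spirit of the Walkfit prototype:
# combine the share of load on the front prongs with the gyro tilt angle.
# FRONT_PRESSURE_LIMIT and TILT_LIMIT_DEG are hypothetical values.

FRONT_PRESSURE_LIMIT = 0.65  # share of load on the front two prongs
TILT_LIMIT_DEG = 15.0        # forward tilt beyond which risk rises

def fall_risk(front_left, front_right, back_left, back_right, tilt_deg):
    """Flag risk when load shifts forward and the stick tilts sharply."""
    total = front_left + front_right + back_left + back_right
    front_share = (front_left + front_right) / total
    if front_share > FRONT_PRESSURE_LIMIT and tilt_deg > TILT_LIMIT_DEG:
        return "high"
    if front_share > FRONT_PRESSURE_LIMIT or tilt_deg > TILT_LIMIT_DEG:
        return "elevated"
    return "normal"

print(fall_risk(3.2, 3.0, 1.0, 0.8, 18.0))  # high
print(fall_risk(2.0, 2.0, 2.0, 2.0, 5.0))   # normal
```

The appeal of the machine learning version is that thresholds like these are learned per patient from the live Bluetooth relay rather than fixed in advance.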

26 July,2024 11:49 AM IST | Mumbai | Shriram Iyengar
Hitul Mistry

Boost recruitment, sales & marketing with AI’s new tools for business

“On average, recruiters spend 31 per cent of their time screening candidates, which could be better utilised elsewhere. Delays caused by unexpected leave, public holidays, and new recruit training can exacerbate this time loss,” shares Hitul Mistry, founder of Digiqt Technolabs. The 32-year-old’s recently launched AI tool introduces what the team calls a candidate scoring system.

Mistry’s recent venture uses AI to recruit employees

Mistry explains, “Unlike humans, AI can screen candidates 24/7 without bias towards attributes like university or location, focusing solely on qualifications and skills listed in job descriptions. Our solution features a web-based interface where HR can effortlessly upload job descriptions and candidate curriculum vitae (CVs) in PDF or DOC formats. Once uploaded, the system generates a score out of 100 for each candidate, streamlining the screening process. Under the hood, a CV data extraction NLP algorithm transforms unstructured CV data into a structured format, storing it in a vector/document database. The structured data is then matched against job descriptions.”

The AI scores each candidate based on job requirements, responsibilities and qualifications

This, he says, reduces unconscious bias and ensures consistent screening. AI can also scale efficiently to handle large volumes of CVs, as seen during the increased attrition rates of the COVID-19 pandemic, maintaining performance without delays. “Such delays, often up to 10 days, can lead to a 10 per cent loss of ideal candidates who accept other job offers. With AI, priority screening and consistent turnaround times eliminate backlogs, resulting in a quicker and more efficient screening process.”

The turning point

The chatbot acts as a 24/7 sales assistant that can resolve agents’ doubts throughout the day

The Ahmedabad-born software engineer currently works out of an office in BKC with his sales team and IT engineers.
Passionate about solving business problems through technology, he discovered a gap at his initial jobs at an insurance broking firm. “I managed terabytes of data and provided various teams with crucial business reports and insights. One day, during a conversation with the business team [of the firm], I discovered that they often had more leads than the call centre could handle, resulting in prioritising the wrong leads. This issue significantly impacted their conversion rates. On noticing this challenge, I began learning about AI and implemented a Proof of Concept (POC) on a small subset of leads. This solution dramatically improved the call centre’s productivity. Since then, I have delved deeper into AI, driven by the desire to harness its potential to solve complex business problems.”

Personalised AI videos help improve insurance pitches as per the customer’s requirements

Other projects by Digiqt Technolabs include a sales virtual assistant and an interactive AI video. While the former, he says, supports field sales heads and provides 24/7 instant responses to agents’ questions, the latter is an interactive video solution that personalises content for each customer. This, too, was inspired by a problem he faced personally.

Focus on the solution

“Upon receiving a quote from a life insurance agent, I found it difficult to understand due to numerous hidden terms and conditions. This challenge inspired me to create a personalised video that simplifies the information by eliminating unnecessary details, and customising elements like voice, interactions, clicks, and graphs. The video allows viewers to click anywhere for more information and includes real-life scenarios that explain the importance of buying term insurance, complete with personalised financial animations. For instance, it could illustrate the potential impact on a family if an insurance policy isn’t purchased in a timely manner,” he explains.
This solution is also ideal for on-boarding customers, with AI assistance available directly in the browser while watching the video. The video can include a form that the AI helps fill out, making the process seamless and user-friendly. This technology can help fintech companies explain their products more effectively and assist customers in the on-boarding process, ultimately enhancing customer understanding and engagement.

It customises elements like voice, interactions, clicks, and graphs, and presents real-life examples to help new customers understand the importance and the various parameters involved in getting insured

While the integration of AI into everyday life promises to revolutionise how we work, live, and interact, he believes it also demands careful consideration of ethical and security challenges. Looking ahead, Mistry is confident. “AI innovation is progressing at an incredibly rapid pace, with new and improved models being introduced every month. Continuously adapting to these advancements can be costly and time-consuming. Instead of focusing solely on setting a precedent for future technology, we should prioritise how effectively we are solving business problems. The true value lies in delivering comprehensive solutions that address these issues thoroughly.”

What is it? Three AI tools help companies manage the recruitment process, appoint a chatbot as a 24/7 sales assistant, and create personalised AI videos for a better understanding of insurance quotations.

How it works? The recruitment tool scores candidates based on their CVs without bias. The sales bot provides a 24/7 service and answers agents’ questions. The personalised AI video, on the other hand, helps break down difficult jargon and the terms and conditions in insurance quotations, while presenting real-life examples to the customer.
Who it affects/benefits? The aim is to help businesses tackle problems by effectively reducing the time spent on sales, hiring, and forming personalised insurance pitches for on-boarding customers.
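The lead-prioritisation proof of concept mentioned earlier can be sketched in a few lines. This is a minimal illustration, not Digiqt's actual system: the feature names, weights and call-centre capacity are assumptions standing in for whatever model and data the firm used.

```python
# A minimal lead-prioritisation sketch. The feature names, weights and
# capacity figure are illustrative assumptions, not details of the real system.

def score_lead(lead, weights):
    """Weighted sum of a lead's features - a stand-in for a trained model."""
    return sum(weights[k] * lead[k] for k in weights)

def prioritise(leads, weights, capacity):
    """Rank leads by score and keep only as many as the call centre can handle."""
    ranked = sorted(leads, key=lambda l: score_lead(l, weights), reverse=True)
    return ranked[:capacity]

weights = {"recency": 0.5, "engagement": 0.3, "budget_fit": 0.2}
leads = [
    {"id": 1, "recency": 0.9, "engagement": 0.2, "budget_fit": 0.5},
    {"id": 2, "recency": 0.4, "engagement": 0.9, "budget_fit": 0.8},
    {"id": 3, "recency": 0.1, "engagement": 0.3, "budget_fit": 0.2},
]
top = prioritise(leads, weights, capacity=2)
print([l["id"] for l in top])
```

The point of the POC was exactly this ordering step: with more leads than agents, calling the highest-scoring subset first lifts conversion rates even before any sophisticated model is trained.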

26 July,2024 11:47 AM IST | Mumbai | Devanshi Doshi
The Bandra-based popular culture haunt uses AI-powered software in 3D printers to create 3D models for characters from famous shows, films, comics and anime. Pics/Anurag Ahire

Bandra entrepreneurs take 3D printing one notch higher with AI

In a lane in the Pali Hill neighbourhood, a door opens to the Marvel and DC-verse daily. Here, heroes are made every day, quite literally. Wonder By House of 3D, created with the aim to provide a space for the growing comics and anime communities of the city, primarily caters to those who like collecting figurines of their favourite superheroes, movie characters, and now, of themselves. With 13 printers installed in the space, founders Delvin Lobo and Aditya Anand integrate AI to make 3D models of characters and elements from popular culture. The duo is licensed to release such models for Stranger Things, The Archies, DC, Marvel, Naruto, and Squid Game, among others. The aim, they say, is to make Indian collectors independent of imported products that are also usually overpriced.

All-in-one assistant

"AI-powered software tools are significantly improving the design process in 3D printing. These tools use machine learning algorithms to optimise designs for functionality, material usage, and production efficiency. For example, generative design software can create multiple design alternatives based on a set of input parameters such as weight, strength, and material constraints. This enables us to explore a broader range of innovative and efficient designs that would be challenging to conceive manually," 43-year-old Anand explains.

Their recent venture allows them to record a person with an AI-based 3D scanner, and create 3D models within a week. "The entire process from scanning and editing the grab to printing takes seven working days. Once this is done, we can deliver subsequent orders in just 48 hours," they explain. "One of the unique advantages of 3D printing is its ability to produce customised and personalised items. AI enhances this capability by using data from various sources, including user preferences and biometric information, to create tailored designs," shares Lobo, 38.

Their new venture allows them to print 3D models of people within a week.
For this, they use a 3D scanner. There are 13 printers in the space that print these 3D models

Although the market is booming, maintaining 3D printers is no cakewalk. Even here, AI comes to their rescue. "AI-driven predictive maintenance systems are increasingly being used to monitor the health of 3D printers. These systems analyse data from various machine components to predict potential failures before they occur. By anticipating and addressing maintenance needs proactively, companies can minimise downtime, reduce repair costs, and extend the lifespan of their equipment," says Anand.

From design to the final output, the tool works as an extension of their abilities. It also helps the duo select the correct material for different models. They believe that with the integration of AI in the industry, countless micro processes are taken care of automatically, with minimal manual intervention.

A bright future

"The fusion of AI and 3D printing is propelling the technology to new heights, offering unprecedented levels of precision, efficiency, and innovation. As it advances, its applications in 3D printing will undoubtedly expand, leading to even more transformative impacts. At House Of 3D, we are at the forefront of this transformation. AI enables us to optimise designs and streamline production processes, ensuring precision and efficiency like never before. Coupled with the versatility of 3D printing, we can bring intricate and personalised products to life at a fraction of the traditional cost and time. This powerful combination not only opens up new possibilities for innovation but also supports sustainable and scalable production practices. We are excited to lead this charge into a new era of manufacturing, where creativity and technology converge to shape a better future," they conclude.

What is it? Bandra-based Wonder by House of 3D prints collectibles like figurines and masks, among other elements, using Artificial Intelligence.

How it works?
From design and manufacturing to the final output, AI looks after the micro processes required in 3D printing, effectively reducing production time with minimal manual intervention

Who it affects/benefits? The 3D models created as collectibles are for the growing popular culture communities built around fan-favourite anime, comics, movies and web series.
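The predictive-maintenance idea Anand describes can be sketched simply: watch a stream of readings from a printer component and raise an alert when its rolling average drifts out of a tolerance band. The component, readings and thresholds below are illustrative assumptions, not House of 3D's actual monitoring setup.

```python
# A minimal predictive-maintenance sketch: flag a printer component when its
# recent readings drift beyond a tolerance band around the expected value.
# The readings, component and thresholds are illustrative assumptions.
from collections import deque

def drift_alert(readings, expected, tolerance, window=3):
    """Return True once the rolling average leaves the tolerance band."""
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
        if len(recent) == window and abs(sum(recent) / window - expected) > tolerance:
            return True
    return False

# A nozzle temperature drifting upwards - maintenance is scheduled before failure.
temps = [210, 211, 210, 214, 218, 223]
print(drift_alert(temps, expected=210, tolerance=5))
```

Real systems learn the "expected" band per component from historical data rather than hard-coding it, but the principle is the same: act on the trend before the breakdown.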

26 July,2024 11:23 AM IST | Mumbai | Devanshi Doshi
Illustration/Uday Mohite

The AI Takeover: Publishers share insights on artificial intelligence in writing

For Meru Gokhale, founder of Editrix.ai, an Artificial Intelligence-enabled editing platform, the idea to launch a game-changing tool in India's vibrant publishing landscape emerged from a glaring human resource shortfall that she had encountered in the publishing industry. A senior editor who has worked in some of India's leading publishing houses, she found it increasingly difficult to find and retain experienced, qualified editors. "Traditional apprenticeship models for training new editors were faltering, especially with the rise of remote work, making it crucial to find more structured and accessible alternatives." The idea was to streamline the editing process while also offering accessible training opportunities, and contributing to a more inclusive publishing landscape.

Editing made easier

Naturally, AI's growing presence is a hotly discussed topic in the cabins and cubicles of grammar Nazis across some of India's largest publishing houses. Gaurav Shrinagesh, CEO-India, SEA&MENA, Penguin Random House, believes the possibilities are immense, "AI has the potential to revolutionise the way we create, distribute, and consume content. We are actively exploring how it can enhance our operations, support our authors, empower our teams, and improve the reader experience, all the while protecting the intellectual property of our creative contributors." At Roli Books, the intent is to use AI for proofreading and fact checking. "However, other tasks such as structural edits, and translations that require more thought, intellect and understanding are best done the old-fashioned way. That said, as prompts get more refined and the technology improves, I am sure the use of AI will be more prevalent for tasks across the publishing process," shares Priya Kapoor, its director, who is clear that AI can save time and resources, if used well.
Shrinagesh believes that though publishing is inherently a human-centric industry, driven by skilled professionals, such editing tools can significantly streamline the editing process, ensuring consistency and reducing errors, thus enhancing efficiency.

Meru Gokhale, Gaurav Shrinagesh, Priya Kapoor and Tina Narang

Gokhale is confident that AI can play a positive role in editing tasks, "One of the most obvious benefits is the promise of increased efficiency for overworked editors, leaving more time for creative work. By automating mechanical editing tasks, AI tools can free up human editors to dedicate more time to the finer nuances of storytelling, such as character development, plot and thematic depth. This shift can lead to higher quality books and a more fulfilling experience for both editors and authors, and of course, readers." She is gung-ho about the feedback so far, "We've already welcomed our first batch of users, and the response has been great; we are, of course, always learning from them." Chandler Crawford, founder, Chandler Crawford Agency Inc., is one such beneficiary. "I used Editrix to write a submission letter for a manuscript I represent, and when my author read the letter generated by the tool, he said, 'As much as I hate to admit it, I think AI has done a nice job.'" For Malashri Lal, writer and former Dean, University of Delhi, the quality of editing got her vote. "It's intelligent and sophisticated, discards jargon, and is very polite in making suggestions for improvement."

Balancing act

However, publishers and editors have some tightrope-walking to do. "Language and translations are intricate and nuanced tasks that benefit from the human touch. AI can assist and optimise, but it cannot fully replace the expertise and intuition of human editors and translators," reminds Shrinagesh.
Kapoor echoes the sentiment, "While there's a lot to be excited about with AI, we need to proceed with caution." She spells out areas like planning, market research, competitive titles, creating pre-publication information, and blurbs for marketing purposes where it can be useful. "In short, use it for work that may be considered tedious, as AI cuts the amount of man hours spent on some tasks." While Roli Books has flirted with it for preliminary research and fact-finding, it hasn't yet used it for rigorous tasks.

At Penguin Random House, a committee oversees its AI strategy, ensuring ethical standards, internal guidelines and responsible use. "Transparency is crucial in integrating AI into the creative process, allowing us to protect copyrighted content while leveraging AI assistance. We are in the exploratory phase of understanding its role in creativity but we don't wish to limit authors with restrictive clauses and are aware of its potential to become a valuable tool in the creative process." Tina Narang, publisher, children's books, HarperCollins India, prefers the wait-and-watch approach. "I think at the moment the approach is largely to understand how AI is being used and what the challenges associated with its use are. With children's literature, it might be useful to generate reference material, activities, illustrations and images, and so on. But in the story space, where developing the story, the plot, the character and more is not mechanical or process-driven, it would still be author-led."

Having been on the other side of the table for decades, Gokhale is fully aware of the challenges, "This tool is designed with a deep respect for the sanctity of the author's voice, unlike others that may impose standardised, templated editing. It takes a highly personalised approach and is trained to preserve the author's voice and intent. It offers tailored editing suggestions that align with the author's unique style and specific goals for their book.
Importantly, it is envisioned as a powerful ally to human editors, not a replacement."

A question of ethics

A tempting prospect for some authors would be to use AI to wing a dream debut or a racy bestseller. Do publishers accept manuscripts created by AI-enabled tools, and can they actually 'read between the lines' if technology has been used? Turns out, publishers are equipped to detect, and also red-flag, such projects. "Ethics and regulation are at the heart of the debate around AI, particularly in creative endeavours. This is a space to watch as organisations are working on creating guardrails to protect those who create. Roli does not accept AI-generated images or manuscripts — perhaps because it's a matter of quality," reveals Kapoor. With the confidence of a seasoned publisher, she believes that intuition comes in handy. Besides, there are tools to spot AI-generated text and images. Narang, like Kapoor, relies on the tried-and-tested editor's eye to call out such manuscripts, "There would be instances — whether an uneven tone, a marked variation in writing style across the manuscript, non-linear jumps in the narrative, etc. — which would be red flags, and prompt one to reassess the manuscript and check for originality." At Penguin Random House, checks and balances are implemented to ensure that authors are the sole creators of their manuscripts. "We are actively monitoring this evolving space to balance fostering creativity and maintaining authenticity," reveals Shrinagesh.

Tech our word

Gokhale urges careful consideration while moving forward. "AI can be a valuable supportive tool for authors and editors but it's important to keep originality as your North Star. Our platform is designed for people who want to professionally edit their own original work." She hopes for a future in literature and publishing driven by a vision where AI empowers both authors and editors.
In this emerging scenario faced by publishers, Shrinagesh drafts the blueprint, "AI will continue to advance and provide opportunities to improve efficiency and discoverability. That said, I believe the human element is irreplaceable, and we must embrace innovation mindfully without compromising on authenticity, integrity and the rich tradition of storytelling."

Afterword: No human writers or editors were hurt while creating this article

NEW AI TOOL Editrix.ai, a new AI-enabled platform, offers solutions to streamline editing processes, including assistance for mechanical, voluminous tasks, freeing editors up for more creative work

BALANCING ACT India's publishers have adopted a cautious approach for now, and are in the exploratory stage while seeking to maintain a balance between human input and technology

STRICT NO-NO Publishers can refuse manuscripts written by authors who have used AI tools

26 July,2024 11:20 AM IST | Mumbai | Fiona Fernandez
The app gives the user audio instructions to crouch, bend, or look around to explore interesting parts of the experience

Classroom 2.0 revamped with artificial intelligence and virtual reality

Computer Science graduate Omkar Pimple's first exposure to a broken education system came in the form of an under-equipped science lab during a visit to his hometown in Palghar district nearly two decades ago. Truth be told, this realisation could have come in most smaller schools in Mumbai or its suburbs, where primitive science labs are inaugurated and forgotten every year. The pitfall in most cases, Pimple believes, is the cost and labour-intensive maintenance. "This is the condition of the schools that have a science lab. Think about the ones that don't," the 32-year-old sighs.

The VR headset and controllers

Pimple's AI-assisted learning lab, Myracle.io, addresses both concerns with one innovation — a virtual AI-assisted science lab, complete with a virtual microscope, life-like 3D models, and now, an AI teacher who evolves with the student's progress. Available on the Meta Quest headset as well as smartphones, the app transforms any four-walled room into a science lab.

The learning experience

The view from the headset shows a 3D model of an atom, while a student looks at the same experience from her smartphone

Once logged into the experience, students get to choose from a list of lessons to learn virtually. Choosing the chapter on introduction to microscopes, for instance, would turn the camera on, allowing students to pan the device across the room to find a table with a microscope placed on it. Following the instructions on the screen, or inside the headset, the student is guided to walk towards the setup and interact with various parts using handheld controllers.

A student jumps to reach a button inside the space station

Currently, the app offers experiences that align with the CBSE and SSC curricula. What's new, however, is the style of instruction. "It's a well-documented fact that no two people learn the same way.
With that in mind, we've incorporated an AI and machine learning model that constantly logs the student's style, pace, and level of comprehension. The course then decides how to shape itself to suit the learner better," elaborates the founder, who is currently working from Germany to keep in touch with new developments in AI and VR.

A live relay of the virtual experience allows the educator to keep track of the movements

Has AI finally come knocking on the doors of the teaching community, we wonder. While Pimple laughs it off, a new feature tells us otherwise. With an upcoming feature, students will be able to converse with a virtual teacher standing alongside them during an experience. These aren't real humans, but AI bots that can listen, and help the child with their queries and doubts. Interestingly, the founder informs us that teachers have taken to the concept well. "An AI bot cannot realistically replace a teacher. But it can come in handy as a supplementary tool for teachers," he notes.

Science for all

With great power comes great responsibility, they say. The founder, who spent his teenage years experimenting with programming languages and working on coding projects with his colleagues at Wadala's Vidyalankar Institute of Technology, seems to know it well. "Focusing on any screen for a long time can put you in a hypnosis-like state where you tend to lose your sense of orientation. We have an AI-assisted environment detection system in place which uses the accelerometer and gyroscope in the smartphone or headset to detect when students are in dangerous environments," the founder reveals. If a student is approaching a flight of stairs, an object in the room, or any obstacle that may cause harm, the experience pauses and flashes an alert.
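The kind of safety check described above can be sketched as a simple threshold test on motion-sensor readings. Myracle.io's actual system will be far more sophisticated (fusing accelerometer and gyroscope streams with learned models); the limits and sample readings here are purely illustrative assumptions.

```python
# A minimal sketch of a motion-based safety check: pause the experience when
# sensor readings cross a danger threshold. Thresholds are illustrative
# assumptions, not the app's real environment-detection logic.
import math

def should_pause(accel, gyro, accel_limit=15.0, gyro_limit=4.0):
    """Flag sudden acceleration (e.g. stumbling near stairs) or rapid rotation."""
    a = math.sqrt(sum(v * v for v in accel))   # acceleration magnitude, m/s^2
    g = math.sqrt(sum(v * v for v in gyro))    # angular velocity magnitude, rad/s
    return a > accel_limit or g > gyro_limit

print(should_pause(accel=(0.1, 9.8, 0.2), gyro=(0.1, 0.0, 0.1)))   # calm standing
print(should_pause(accel=(4.0, 18.5, 3.0), gyro=(0.2, 0.1, 0.3)))  # sudden jolt
```

In a real headset or phone app, this check would run every frame on the live sensor feed, with the pause-and-alert overlay triggered the moment it returns true.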
A team member guides a user through a virtual walkthrough of the International Space Station

At a recent workshop conducted at the Fellowship of the Physically Handicapped in Haji Ali, these features underwent a comprehensive test drive. "Individuals with physical disadvantages stand to benefit greatly from the virtual experience because they may find it difficult to physically walk up to machinery and operate it in real life," Pimple explains. While the focus, in Pimple's words, remains on offering physics, chemistry and biology lessons to rural and less privileged communities, the team is currently expanding their content library to cater to a larger audience. The future looks optimistic, "Gen Z is adopting the digital way of life at ages as young as five. The key is to make learning science a part of this experience, not something they need to take time out for. We are building storytelling, mathematics and geography modules soon."

Omkar Pimple. Pics/Atul Kamble

LOG ON TO @myracle.io
AVAILABLE on Play Store and App Store

What is it? Myracle.io is a virtual learning app that uses a mix of AI and VR to create a 3D, interactive, and evolving learning experience.

How it works? The app uses an AI-based model to identify how a student learns and reshapes itself to fit their pace and style. Using VR, 3D models that students can interact with are projected in physical spaces in real time.

Who it affects/benefits? Students in schools that do not house functional science labs, or struggle with maintenance, stand to benefit from the app that brings the lab experience to any room, including their homes.

26 July,2024 10:37 AM IST | Mumbai | Devashish Kamble
Co-founder Roshan Raju demonstrates how his sizing product can be used on a smartphone globally

Shopping meets storytelling with this new AI start-up

With denims getting wider and T-shirts becoming baggier, Gen Z fashion is known to prioritise comfort, community-building, and sustainability. The generation born into a digital world thrives on e-shopping. But often, it takes one click on the 'return' option to send their sustainability goals down the drain, into a landfill. Reports claim that over five billion pounds of waste is generated through returns. The problem, believes Roshan Raju, co-founder of start-up Imersive, usually lies with sizing. "Brands lose over $200 billion on returns [of apparel] annually. Of this, 66 per cent of returns are related to sizing concerns," shares the Chennai-based 26-year-old, who is working on reimagining e-commerce through Augmented Reality (AR), WebXR, and Artificial Intelligence (AI). Of the three products released since the company's inception in July last year, the most recent one provides a solution to reduce returns caused by sizing, while also enhancing and gamifying the overall online shopping experience.

Raju holds up the phone as this writer stands in front of the camera to begin recording

The right size

To try it out, this writer meets the young entrepreneur in a five-star lobby in Bandra. When we hear technological jargon like AR, WebXR and AI, we expect a full set-up, with devices and equipment like VR headsets. However, Raju turns up with just his mobile phone. The idea is to make e-shopping more efficient without needing extra devices beyond the ones consumers already use. "Our products are in plug-and-play format. You don't need separate apps either. The website you wish to purchase apparel from will plug us in, and you will be able to use the feature on your phone," he explains. The focus is equally on saving the consumer's time. With a click on a link, and after giving essential login details, we are ready to be sized within ten seconds. Raju clicks on a T-shirt, enters our height, and asks us to stand in front of the camera.
We rotate twice, and the AI is ready with our measurements and the recommended size to buy for the selected T-shirt as per the brand's sizing. These measurements include everything from shoulder breadth, hip, neck, waist and chest circumference, to head, bicep, wrist, calf and ankle circumference. "As you rotate, the AI makes a 3D model of you, and sends the readings to our set-up in the office [in Chennai]. No matter what clothes you wear, the system will be able to measure you with a possible difference of half a centimetre. We guarantee 90 per cent accuracy — which is also what people look for in ready-to-wear clothes," he suggests. The product also lets you edit the measurements if you prefer looser or tighter fits, or know your exact measurements. While this product was first demonstrated in Bandra at streetwear festival All You Can Street earlier this year, Raju gives us a glimpse of another product.

The AI tool takes 10 seconds to record and read your measurements

Make shopping fun again

This one, he says, caters to both online and offline shopping. "Imagine you're in a mall, and pass by a mirror outside a store. You pause to catch a glimpse of how you look, but what you see on yourself are not the clothes you initially wore, but a pair of new launches by the store," Raju says. He turns on his front camera, and shows us how that works. The writer selects a sweatshirt, and our clothes change on screen. "This feature will show you how you will look in the selected apparel, whether it fits right, is according to your liking, etc., in real time. It will also display if the product is available in your size. All that's left to do then is purchase it from the counter," he explains.

The size chart includes elaborate measurements for chest, height, hips, ankle and wrist circumference, among others. It also recommends the size to buy for the selected apparel

While AI is being explored in various fields, very few are taking it ahead in fashion in the country, Raju notes.
"Most of my clients are from Mumbai. If you were to ask me, I'd say Mumbai is the fashion capital of India, followed by Delhi. People here are accepting of change and open to experimentation. But the problem with online shopping is that it is flawed. I understood this as a postgraduate student, and hence founded Imersive with Giri [Vedagiri Vijayakumar] to integrate storytelling into fashion through technology." The first product launched by the duo gamifies shopping centres in AR. This can be experienced through VR headsets or your mobile phone globally. It allows you to experience physical shopping online, where you can walk around the virtual store, check out their offerings for sale, make purchases in real time, or hang out in the space.

Another product allows you to try on clothes virtually on a mirror or your phone's screen. Pics/Shadab Khan

"Online shopping is booming. But returns cause a huge setback to brands. I won't be surprised if they stop accepting returns in the near future. Our products enhance online shopping and allow brands to keep their carbon footprints in check. We chose technology because it isn't just a futuristic tool that might work; it has already proven to work across generations and will continue to do so with innovations like these," Raju concludes.

Five billion pounds of waste generated through returns end up in landfills

What is it? Three tools enable size recommendations within ten seconds and gamify the online shopping experience through storytelling.

How it works? Using WebXR technology, these tools can be accessed globally on smartphones.

Who it affects/benefits? Aimed at reducing returns substantially and enhancing online shopping, the tools help fashion labels keep their carbon footprint in check and build a community with their customers.
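The size-recommendation step described earlier can be sketched as a lookup against a brand's size chart: once the AI has extracted body measurements, the tool picks the smallest size that leaves enough room for a comfortable fit. The chart values, the single chest measurement, and the "ease" allowance below are illustrative assumptions, not Imersive's actual data or algorithm.

```python
# A minimal size-recommendation sketch: match an AI-extracted body measurement
# against a hypothetical brand size chart. Values are illustrative assumptions.

SIZE_CHART = {  # garment chest circumference in cm per size (hypothetical brand)
    "S": 92, "M": 98, "L": 104, "XL": 110,
}

def recommend(chest_cm, ease=6):
    """Pick the smallest size whose chest measurement allows some wearing ease."""
    for size, chart_chest in sorted(SIZE_CHART.items(), key=lambda kv: kv[1]):
        if chart_chest >= chest_cm + ease:
            return size
    return max(SIZE_CHART, key=SIZE_CHART.get)  # fall back to the largest size

print(recommend(chest_cm=95))
```

The product's edit option maps naturally onto the `ease` parameter: preferring a looser fit simply raises the allowance before the chart lookup.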

26 July,2024 10:29 AM IST | Mumbai | Devanshi Doshi
Jayveer Kochhar places the sensors on a mannequin

How this AI-enabled device detects early signs of rheumatoid conditions

We would have compared the invention to something from Dr House or Grey's Anatomy, but this writer was not sure if Jayveer Kochhar, a Gen Z representative, would catch the reference. On the morning we visit him at his home in Juhu, the teenager is ready with his laptop to give us a demonstration of his invention, Breathfree Bamboo Spine.

A close-up of the internal circuitry of the sensors

The story, as so often, begins at home. "My grandmother and uncles have been diagnosed with ankylosing spondylitis. It runs in the family," the Juhu-based Kochhar tells us. A genetic condition, ankylosing spondylitis causes an inflammation of joints and tendons. "Often, the bones and tendons fuse with each other, even in the vertebrae. Since there is no cure, the only solution is an early diagnosis," he elaborates. This usually involves a blood test to detect the presence of a specific gene variant, HLA-B27. "The test is expensive, and not available at all labs across India. Plus, I have noticed that people can get afraid of invasive examinations. They tend to ignore signs till it becomes very obvious," he points out.

Using mathematics in science

A Grade 11 student of the Dhirubhai Ambani International School, Kochhar is a fan of mathematics and has been learning coding since he was four years old. Having learnt about the disease, he decided to use programming and mathematics to tackle the problem. "Even when the disease has not fully progressed, it can be detected through slight changes in breathing patterns. When you have the disease, your chest does not expand as much. You struggle to open up the lungs, or sit straight, since the vertebrae or the bones are inflamed or fused," Kochhar remarks. Armed with this knowledge, and after discussing his ideas with the family orthopaedic doctor, Dr Amit Joshi, the teenager set to work on 3D models in September 2023.

Kochhar (right) tests the sensors on the writer's hand.
Pics/Anurag Ahire

Dr Joshi began the conversation with the teenager late in 2022. "The disease begins with the fusion of the joints. Often, the issue starts out as a typical back pain. It is only when the disease goes out of control that people are referred to us. By then we have lost precious time. If we have a machine where we can actually see holistic data on the expansion of the chest, we can pre-empt the possibilities. In a normal person, the expansion of the chest is around 2.5 cm. I suggested he add further inputs from the scapular area as well. The more we tested with patients, the more we discovered that the equipment does have an accuracy of over 90 per cent," Dr Joshi shared.

Testing it out

As we examine the equipment, Kochhar points to five button sensors. These can be attached to removable stickers, and placed on the body. "They are cheap, non-invasive and reusable. Each of these sensors has an Inertial Measurement Unit (IMU) that measures acceleration and angular velocity. They are placed along five points of the body, i.e. the pectoral muscles, scapulae, sternum and the spine. You might not notice, but there is displacement of the body at a certain angle and a specific velocity when you breathe," he explains. Worn directly on the skin, the sensors detect these movements and relay them to a microcontroller that transmits them further to a laptop over Wi-Fi.

Jayveer Kochhar

He places one of the sensors on our arm and asks us to move it. "The data is relayed live, and measured for a span of five or 10 minutes," he points to the numbers on the screen. Altogether, the five sensors relay 60 data points every second. "These are analysed in two parts. The first is the pre-processing that calculates displacement through an algorithm. The second part involves the use of set values to predict possibilities," Kochhar points out. It is in the second part that Artificial Intelligence (AI) steps in.
"I used a recurrent neural network known as a long short-term memory (LSTM) module. The module makes observations based on its previous learnings from the gathered five-minute data sets," he explains. Thus, when it spots a reduced displacement of an individual's chest or scapula during breathing, it will highlight the anomaly. This data can be plotted on a graph, or as a numerical sheet. It can help in detecting multiple rheumatological disorders, says the Grade 11 student.

Where does a teenager learn these terms, we wonder? "I have always been curious. Plus, Dr Joshi kept offering me inputs through the testing period," he says, adding that he has already shown his invention at the finals of the International Science and Engineering Fair (ISEF) in Los Angeles, USA, earlier this year. For now, Kochhar's inventions have been funded by the family, with scholarships and prizes adding to the purse. But the teen hopes to turn this into a tool for those in need. "In villages, the condition remains undiagnosed and ignored due to the expensive tests. This is a simple tool that can be used by any GP (General Practitioner), with the collected data saved on a virtual platform. That's the eventual target," he says.

Despite the detailed medical knowledge, Kochhar wants to pursue computer science as a career path; a field that will be altered by AI and machine learning, we remind him. "I know. People keep telling me that AI will take over human roles. But it is dependent on data. Of course, it evolves rapidly. For instance, ChatGPT has achieved 90 per cent accuracy already. But to cross that final 10 per cent requires inputs to be more complex and detailed. This is where it lacks human imagination," the young man concludes. Till then, the future seems to be in good hands.
What is it? Breathfree Bamboo Spine is a set of wearable sensors that calculate the displacement of different sets of muscles and their mobility during breathing to determine early signs of possible rheumatoid conditions.

How it works? Five sensors placed at optimum positions on an individual's body relay data that is analysed by a machine learning programme to predict anomalies.

Who it affects/benefits? The analyses can help detect problems in posture and erratic breathing patterns, and enable early detection of rheumatoid issues like ankylosing spondylitis and other forms of arthritis.
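The two-part analysis Kochhar describes can be sketched in miniature: pre-processing that integrates the IMU's acceleration readings twice to estimate displacement, followed by an anomaly check against the roughly 2.5 cm chest-expansion norm Dr Joshi mentions. The real system feeds the pre-processed data into an LSTM; here a plain threshold stands in for the learned model, and the sample readings, sampling interval and anomaly ratio are illustrative assumptions.

```python
# A minimal sketch of the two-part analysis: (1) pre-processing that integrates
# acceleration twice to estimate displacement, (2) an anomaly check. The actual
# device uses an LSTM for step 2; a simple threshold stands in for it here.
# Sample values and the sampling interval are illustrative assumptions.

def displacement(accels, dt):
    """Pre-processing: integrate acceleration (m/s^2) twice; return peak displacement (m)."""
    velocity, position, peak = 0.0, 0.0, 0.0
    for a in accels:
        velocity += a * dt
        position += velocity * dt
        peak = max(peak, abs(position))
    return peak

def flag_anomaly(peak_cm, norm_cm=2.5, ratio=0.6):
    """Flag chest expansion well below the healthy ~2.5 cm norm."""
    return peak_cm < norm_cm * ratio

# One simulated breath from a chest sensor, sampled every 0.1 s.
peak = displacement([0.4, 0.4, 0.0, -0.4, -0.4], dt=0.1) * 100  # metres -> cm
print(round(peak, 2), flag_anomaly(peak))
```

In the actual device, five such sensor streams (60 data points a second) would be processed together, and the LSTM would learn what "reduced displacement" looks like from collected data sets rather than relying on a fixed cut-off.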

26 July,2024 10:23 AM IST | Mumbai | Shriram Iyengar