Viral News

Self-Driving Cars vs. Coding Copilots

Back in the mid-2010s, the world of autonomous vehicles was making great progress, and it seemed that we would soon be ushered around in cars that drove themselves, leaving us free to spend our time how we wanted. That obviously hasn’t happened, but instead, we’ve been treated to a form of AI we weren’t expecting: generative AI-powered copilots. Following the launch of ChatGPT in late 2022, the world of generative AI has been on a tear. Every company seems to be investing in large language models (LLMs) to build one of the two most visible forms of GenAI: chatbots and…
Read More
Data Machina #252

Diffusion, FM & Pre-Trained AI models for Time-Series. DeepNN-based models are starting to match or even outperform statistical time-series analysis and forecasting methods in some scenarios. Yet DeepNN-based models for time-series suffer from four key issues: 1) complex architectures, 2) the enormous amount of time required for training, 3) high inference costs, and 4) poor context sensitivity. Latest innovative approaches. To address those issues, a new breed of foundation or pre-trained AI models for time-series is emerging. Some of these new AI models use hybrid approaches borrowed from NLP, vision/image, or physics modelling, such as transformers, diffusion models, KANs and state space…
Read More
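
For readers who want a concrete picture of what a DeepNN-based time-series model looks like, here is a minimal, illustrative PyTorch sketch of a patch-based transformer forecaster. It is not any of the foundation models the newsletter covers; the class name, patch length, and horizon are arbitrary choices.

```python
# Minimal patch-based transformer forecaster (illustrative only; not any
# specific foundation model mentioned in the newsletter).
import torch
import torch.nn as nn

class PatchForecaster(nn.Module):
    def __init__(self, patch_len=16, d_model=64, n_heads=4, n_layers=2, horizon=24):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)            # one token per patch
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, horizon)               # forecast from last token

    def forward(self, x):                                     # x: (batch, seq_len)
        b, t = x.shape
        patches = x.view(b, t // self.patch_len, self.patch_len)
        z = self.encoder(self.embed(patches))
        return self.head(z[:, -1])                            # (batch, horizon)

model = PatchForecaster()
history = torch.randn(8, 128)      # 8 series, 128 past observations each
forecast = model(history)          # (8, 24) point forecasts
print(forecast.shape)
```
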
The Role of Synthetic Data in Cybersecurity

Data's value is something of a double-edged sword. On one hand, digital data lays the groundwork for powerful AI applications, many of which could change the world for the better. On the other, storing so many details on people creates huge privacy risks. Synthetic data provides a possible solution. What Is Synthetic Data? Synthetic data is a subset of anonymized data – data that doesn't reveal any real-world details. More specifically, it refers to information that looks and acts like real-world data but has no ties to actual people, places or events. In short, it's fake data that can produce real results. In…
Read More
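
As a rough illustration of data that "looks and acts like real-world data but has no ties to actual people," the toy sketch below fits simple statistics of a (pretend) sensitive table and samples brand-new records from them. Column names are invented, and real synthetic-data tools use far more sophisticated generative models.

```python
# Toy sketch: generate synthetic records that mimic the statistics of a real
# table without copying any actual row. Column names are made up for
# illustration; production tools use far richer generative models.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

# Pretend this is the real, sensitive table.
real = pd.DataFrame({
    "login_count": rng.poisson(20, size=1000),
    "bytes_sent": rng.lognormal(mean=10, sigma=1.0, size=1000),
})

# Fit simple marginal statistics plus the correlation structure...
mean = real.mean().to_numpy()
cov = real.cov().to_numpy()

# ...and sample brand-new records from that fitted distribution.
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=1000),
    columns=real.columns,
).clip(lower=0)

print(synthetic.describe())
```
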
Announcing General Availability of Liquid Clustering

We’re excited to announce the General Availability of Delta Lake Liquid Clustering in the Databricks Data Intelligence Platform. Liquid Clustering is an innovative data management technique that replaces table partitioning and ZORDER so you no longer have to fine-tune your data layout to achieve optimal query performance. Liquid Clustering significantly simplifies data layout-related decisions and provides the flexibility to redefine clustering keys without data rewrites. It allows data layout to evolve alongside analytic needs over time – something you could never do with partitioning on Delta. Since the Public Preview of Liquid Clustering at the Data and AI Summit last year, we’ve…
Read More
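
A minimal sketch of what this looks like from a Databricks notebook, assuming the documented Liquid Clustering SQL; the table and column names are invented.

```python
# Illustrative Databricks notebook snippet (table/column names are made up).
# Assumes a `spark` session in a Databricks environment with Liquid Clustering.

# Create a Delta table with clustering keys instead of PARTITIONED BY / ZORDER.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_events (
        event_id BIGINT,
        country  STRING,
        event_ts TIMESTAMP
    )
    CLUSTER BY (country, event_ts)
""")

# Clustering keys can later be redefined without rewriting existing data.
spark.sql("ALTER TABLE sales_events CLUSTER BY (event_id)")

# OPTIMIZE incrementally clusters newly written data.
spark.sql("OPTIMIZE sales_events")
```

Because the clustering keys live in table metadata rather than in a directory layout, changing them is a metadata operation, which is what allows the layout to evolve without data rewrites.
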
Top Data Validation Tools for Machine Learning in 2024

Image generated with Midjourney. It was challenging to stop myself from starting this article with some variation of the popular phrase "garbage in, garbage out." Well, I did it anyway. But jokes aside, we can easily imagine a situation in which we have built and deployed a machine learning model (possibly a black box) that accepts some input and returns some predictions. So far, so good. However, with tons of complexity in everything that happens before the model (data preprocessing and manipulation), in the model itself, and in any post-processing of the outputs, many things can go wrong. And in some mission-critical fields (finance, healthcare, or security),…
Read More
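
To make the problem concrete, here is a tiny hand-rolled validation sketch in pandas showing the kinds of checks (schema, missing values, value ranges) that the tools reviewed in the article automate. It does not use any particular tool's API, and the column names are invented.

```python
# Hand-rolled sketch of the checks that data-validation tools automate:
# schema, missing values, and value ranges. Column names are illustrative.
import pandas as pd

EXPECTED_SCHEMA = {"age": "int64", "income": "float64", "label": "int64"}

def validate(df: pd.DataFrame) -> list[str]:
    errors = []
    # 1) Schema: every expected column present with the expected dtype.
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    # 2) Completeness: no nulls allowed in model inputs.
    for col in df.columns:
        if df[col].isna().any():
            errors.append(f"{col}: contains missing values")
    # 3) Ranges: catch obviously corrupted records before they hit the model.
    if "age" in df.columns and not df["age"].between(0, 120).all():
        errors.append("age: values outside [0, 120]")
    return errors

batch = pd.DataFrame({
    "age": [34, 29, 151],
    "income": [52000.0, 61000.0, 58000.0],
    "label": [0, 1, 1],
})
print(validate(batch))   # -> ['age: values outside [0, 120]']
```
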
Fast Stochastic Policy Gradient: Negative Momentum for Reinforcement Learning

arXiv:2405.12228v1 Announce Type: new Abstract: Stochastic optimization algorithms, particularly stochastic policy gradient (SPG), have reported significant success in reinforcement learning (RL). Nevertheless, how to quickly obtain an optimal solution in RL is still a challenge. To tackle this issue, this work develops a fast SPG algorithm from the perspective of utilizing momentum, coined SPG-NM. Specifically, in SPG-NM, a novel type of negative momentum (NM) technique is applied to the classical SPG algorithm. Different from existing NM techniques, we adopt a few hyper-parameters in our SPG-NM algorithm. Moreover, the computational complexity is nearly the same…
Read More
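
The abstract does not spell out the SPG-NM update rule, so the snippet below is only a generic illustration of pairing a REINFORCE-style policy gradient with a momentum buffer whose coefficient is negative, on a toy two-armed bandit. It should not be read as the paper's algorithm.

```python
# Generic illustration only: REINFORCE on a 2-armed bandit with a momentum
# buffer whose coefficient is negative. This is NOT the SPG-NM update from
# the paper, which the abstract does not spell out.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])      # arm 1 pays more on average
theta = np.zeros(2)                    # logits of the softmax policy
momentum = np.zeros(2)
alpha, beta = 0.1, -0.2                # beta < 0: "negative momentum"

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = rng.normal(true_means[a], 0.1)   # stochastic reward
    grad = -probs                        # d log pi(a) / d theta ...
    grad[a] += 1.0
    grad *= r                            # ... scaled by the reward (REINFORCE)
    momentum = beta * momentum + grad
    theta += alpha * momentum

print(softmax(theta))   # probability mass should concentrate on arm 1
```
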
Focus on Low-Resolution Information: Multi-Granular Information-Lossless Model for Low-Resolution Human Pose Estimation

arXiv:2405.12247v1 Announce Type: new Abstract: In real-world applications of human pose estimation, low-resolution input images are frequently encountered when the performance of the image acquisition equipment is limited or the shooting distance is too great. However, existing state-of-the-art models for human pose estimation perform poorly on low-resolution images. One key reason is the presence of downsampling layers in these models, e.g., strided convolutions and pooling layers, which further reduce the already insufficient image information. Another key reason is that the body skeleton and human kinematic information are not fully utilized. In this work, we propose a Multi-Granular Information-Lossless (MGIL) model…
Read More
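
A quick PyTorch illustration of the first point (not the MGIL model itself): for a 64x48 low-resolution crop, a typical stack of stride-2 convolutions leaves only a handful of spatial cells from which to localize joints.

```python
# Illustration of the problem the abstract describes, not the MGIL model:
# strided convolutions shrink an already low-resolution input to almost nothing.
import torch
import torch.nn as nn

low_res = torch.randn(1, 3, 64, 48)      # e.g. a distant, low-resolution person crop

backbone = nn.Sequential(                # four stride-2 stages, as in typical backbones
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
)

features = backbone(low_res)
print(features.shape)    # torch.Size([1, 256, 4, 3]) -- only a 4x3 grid of cells
                         # left to localize every body joint
```
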
The Arabic Noun System Generation

arXiv:2405.11014v1 Announce Type: new Abstract: In this paper, we show that the multiple-stem approach to nouns with a broken plural pattern allows for greater generalizations to be stated in the morphological system. Such an approach dispenses with truncating/deleting rules and other complex rules that are required to account for the highly allomorphic broken plural system. The generation of inflected sound nouns necessitates a pre-specification of the affixes denoting the sound plural masculine and the sound plural feminine, namely uwna and aAt, in the lexicon. The first subsection of section one provides an evaluation of some of the previous analyses of…
Read More
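
As a toy illustration of the generation idea, the sketch below pre-specifies the sound-plural affixes uwna and aAt in a tiny lexicon and lists a separate stem for a broken plural. The transliterations and lexicon entries are invented for illustration and are not taken from the paper.

```python
# Toy illustration: sound plurals are formed by affixation (uwna / aAt
# pre-specified in the lexicon), while a broken plural comes from a separately
# listed stem. Transliterations and entries are illustrative only.
SOUND_MASC_PLURAL = "uwna"
SOUND_FEM_PLURAL = "aAt"

# Each lexicon entry: plural class, plus an extra stem if the plural is broken.
LEXICON = {
    "muEallim":  {"class": "sound_masc"},                     # 'teacher (m.)'
    "muEallima": {"class": "sound_fem"},                      # 'teacher (f.)'
    "kitAb":     {"class": "broken", "plural_stem": "kutub"}, # 'book'
}

def pluralize(noun: str) -> str:
    entry = LEXICON[noun]
    if entry["class"] == "sound_masc":
        return noun + SOUND_MASC_PLURAL
    if entry["class"] == "sound_fem":
        stem = noun[:-1] if noun.endswith("a") else noun      # final -a drops
        return stem + SOUND_FEM_PLURAL
    return entry["plural_stem"]          # broken plural: use the listed stem

for noun in LEXICON:
    print(noun, "->", pluralize(noun))
```
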
Fine Tuning Phi 1.5 using QLoRA on the Stanford Alpaca Dataset

Quantized LoRA, more commonly known as QLoRA, is a combination of quantization and Low-Rank Adaptation for fine-tuning LLMs. Simply put, LoRA is a technique to adapt large language models to specific tasks without making them forget their pretraining knowledge. In QLoRA, we load the pretrained model weights in a quantized format, say 4-bit (INT4). The adapter (LoRA) layers, however, are loaded in higher precision, FP16 or FP32. This reduces GPU memory consumption to a great extent, making fine-tuning possible on low-resource hardware. To this end, in this article, we will be fine-tuning the Phi 1.5 model…
Read More
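
A sketch of the setup the article describes, using Hugging Face transformers, bitsandbytes, and peft. The exact LoRA target_modules depend on the Phi implementation in your transformers version, so the names below are an assumption that may need adjusting.

```python
# Sketch of a QLoRA setup for Phi 1.5 with transformers + bitsandbytes + peft.
# The LoRA target_modules below are an assumption and depend on the Phi
# implementation in your transformers version.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Base weights load in 4-bit; compute happens in FP16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# LoRA adapters stay in higher precision and are the only trainable weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumption, see note
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only a small fraction of parameters train
```
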
Informatica CEO: Good Data Management Not Optional for AI

(greenbutterfly/Shutterstock) The big data era may have started a decade-and-a-half ago, but for many companies, it’s the current AI revolution that’s forcing them to finally get serious about data management, says Informatica CEO Amit Walia. “What is AI without good quality data?” he says. Data is the foundation for a host of corporate efforts these days, and that realization is leading many companies to renew their interest in establishing a comprehensive data management strategy, Walia told Datanami last week in advance of Informatica World, which takes place in Las Vegas this week. “The driver is all of these digital initiatives…
Read More