Data Types in Machine Learning

English Version

Machine learning works with specific data types. Each type guides how you clean data, choose models, and build features. Here are the main groups.

1. Numerical Data

Values that represent quantities. You can apply math operations on them.

  • Continuous. Any value within a range. Examples: height and temperature.
  • Discrete. Whole numbers. Examples: counts of items.

2. Categorical Data

Values that represent groups. Models need encoding to use them.

  • Nominal. No order. Example colors and cities.
  • Ordinal. Has order. Example ratings.
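As a small hedged sketch, assuming a pandas DataFrame with a nominal city column and an ordinal rating column (both made up for illustration), encoding could look like this.

import pandas as pd

df = pd.DataFrame({
    "city": ["Rabat", "Fes", "Rabat"],
    "rating": ["low", "high", "medium"]
})

# Nominal column: one hot encoding, no order implied
df = pd.get_dummies(df, columns=["city"])

# Ordinal column: map categories to ordered integers
df["rating"] = df["rating"].map({"low": 0, "medium": 1, "high": 2})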

3. Binary Data

Values with two states. Example yes or no. Many classification models use this format.

4. Text Data

Sentences or documents. You convert text into tokens or vectors before training.

5. Image Data

Pixels stored in arrays. Used in vision tasks. Needs preprocessing and augmentation.

6. Audio Data

Waveforms or spectrograms. Used in speech tasks. Needs feature extraction.

7. Time Series Data

Values ordered by time. Used in forecasting and anomaly detection. Needs sequence handling.

8. Tabular Data

Rows and columns. Common in business and analytics. Supports mixed types.

Why This Matters

Each type needs its own preparation steps. Your model choice depends on the data format and structure.


Moroccan Darija Version

Machine learning kaykhdm b data mkhtlfa. Kol type kay7dd tariqa dyal cleaning, models, w features. Hna l types l m3rfin.

1. Numerical Data

Qiyam nssabiya. T9dr tdir 3lihom operations.

  • Continuous. Qiyam bin range. Bhal temperature.
  • Discrete. A3dad kamla. Bhal counts.

2. Categorical Data

Qiyam dyal groups. Kay7taj encoding bach ykhdm f models.

  • Nominal. Bla tartib. Bhal colors.
  • Ordinal. Fih tartib. Bhal ratings.

3. Binary Data

Qiyam b jouj states. Bhal yes w no.

4. Text Data

Nass w sentences. Khass conversion l tokens ola vectors.

5. Image Data

Pixels f arrays. Msta3mla f vision. Khass preprocessing.

6. Audio Data

Waveforms ola spectrograms. Msta3mla f speech. Khass extraction.

7. Time Series Data

Qiyam mrrtba b time. Msta3mla f forecasting.

8. Tabular Data

Rows w columns. Msta3mla f analytics. Kayjma3 types mkhtlfin.

3lach hadi mohemma

Kol type kay7taj steps mokhtlfin. Model choice kaybdel 3la hsab data format.

Where to Start in Machine Learning?

English Version

1. Learn the Core Math

Start with linear algebra, probability, and basic calculus. You need these topics to understand models. Keep the focus on vectors, matrices, derivatives, and distributions.

2. Learn Python

Python is the main language for machine learning. Learn variables, loops, functions, and modules. Then learn NumPy and Pandas. These libraries handle data and arrays.

3. Understand Data Handling

Learn how to load data, clean data, and prepare data. Learn missing values, encoding, normalization, and splitting. Clean data improves model results.

4. Start With Supervised Learning

Begin with simple models. Learn linear regression, logistic regression, decision trees, and k nearest neighbors. Build small projects and test them.

5. Learn Model Evaluation

Learn accuracy, precision, recall, F1, RMSE, and confusion matrices. Evaluation helps you understand model performance.
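A short scikit-learn sketch, using made-up label arrays, shows how these metrics are computed.

from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1]   # made-up true labels
y_pred = [1, 0, 0, 1, 0, 1]   # made-up predictions

print(accuracy_score(y_true, y_pred))
print(precision_score(y_true, y_pred))
print(recall_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))

RMSE applies to regression targets and equals the square root of mean_squared_error from the same module.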

6. Move to Advanced Models

After the basics, study random forests, gradient boosting, SVMs, and neural networks. These models give stronger results for many tasks.

7. Practice With Real Projects

Pick small datasets. Build simple pipelines. Train, test, and evaluate. Improve the workflow step by step.

8. Learn ML Tools

Learn scikit-learn first for classical models. Then explore TensorFlow and PyTorch. These libraries help you build deep learning systems.

9. Read Research and Documentation

Read official docs for libraries. Follow updates and study tutorials. This keeps your skills fresh.

10. Stay Consistent

Set a clear schedule. Build often. Test ideas. Small steps push you forward in ML.


Moroccan Darija Version

1. Bda b Math l Asasiya

Bda b linear algebra, probability, w calculus. Hadi l core bach tfham models. Sir l vectors, matrices, derivatives, w distributions.

2. T3llam Python

Python hiya l language l msta3mla f ML. T3llam basics. B3d t3llam NumPy w Pandas bach t7km f data.

3. Fham Data Handling

T3llam kifash tloadi data, tnqqiha, w tjiheziha. Fham missing values, encoding, normalization, w splitting.

4. Bda b Supervised Learning

Bda b linear regression, logistic regression, decision trees, w k nearest neighbors. Diri small projects.

5. T3llam Evaluation

Fham accuracy, precision, recall, F1, RMSE, w confusion matrix. Hadi katwerik performance.

6. Zid l Models Mtwrin

B3d basics, qra random forests, gradient boosting, SVM, w neural networks.

7. Diri Projects 3lihom Data Bssita

Khddm 3la datasets sgharin. Sna3 pipeline. Train, test, w evaluate.

8. T3llam Tools dyal ML

Bda b scikit learn. B3d sir l TensorFlow w PyTorch.

9. Qra Docs w Tutorials

Tb3a docs dyal libraries. Qra examples w updates.

10. Dwam 3la tta3allom

Dir schedule. Khddm b niyma. Qder t3awd. L consistency katbni skills.

Top 10 Tools for Data Analysis

English Version

Top 10 Tools for Data Analysis

Data analysis needs clear tools that help you explore data, clean data, and build insights. Here are ten strong options used in industry and education.

  1. Python. Flexible language with strong libraries like Pandas and NumPy. Works for cleaning, processing, and modeling.
  2. Pandas. Python library for tables and data manipulation. Supports filtering, grouping, and transformation.
  3. NumPy. Core library for numerical operations in Python. Works with arrays and scientific tasks.
  4. R. Strong language for statistics and data visualization. Popular in academia and research.
  5. SQL. Core skill for working with databases. Helps you query, filter, and join data.
  6. Excel. Common tool for fast analysis. Supports formulas, pivot tables, and charts.
  7. Tableau. Visualization tool for dashboards and reports. Helps you present insights.
  8. Power BI. Microsoft tool for interactive dashboards. Connects to many data sources.
  9. Apache Spark. Distributed system for large datasets. Helps with big data processing.
  10. Jupyter Notebook. Environment for code, text, and experiments. Supports Python and other languages.

How to Pick the Right Tool

Pick a tool based on dataset size, analysis goals, and workflow needs. Use Python or R for full control. Use Excel for quick tasks. Use Tableau or Power BI for dashboards.


Moroccan Darija Version

Top 10 Tools dyal Data Analysis

Data analysis kayhtaj tools wadhin bach t9ra data, tnqqa data, w tjib insights. Hna ashno l tools li katsst3mlhum ktar.

  1. Python. Language fleksibl. Kay3awnk b Pandas w NumPy l processing w modeling.
  2. Pandas. Library dyal Python bach t3ml m3a tables. Kay3awnk f filtering w grouping.
  3. NumPy. Library dyal operations nssabiya. Kaykhddm m3a arrays.
  4. R. Language qwiya f statistics w visualization.
  5. SQL. Tool l databases. Kay3awn f queries w joins.
  6. Excel. Tool m3ruf bach tdir analysis s3iba b sor3a. Fih formulas w pivot tables.
  7. Tableau. Tool dyal dashboards w charts.
  8. Power BI. Tool dyal Microsoft bach tdir dashboards mokhtalfin.
  9. Apache Spark. System l big data. Kay3awn f processing kbir.
  10. Jupyter Notebook. Environment bach tktb code w tdir tests.

Kif tkhatar l tool

Khatar 3la hsab size dyal data, hadf dyal analysis, w workflow. Python w R mzyanin l control. Excel mzyan l tasks s3iba w sghira. Tableau w Power BI mzyanin l dashboards.

Top 10 Tools for Web Scraping

English Version

Top 10 Tools for Web Scraping

Web scraping helps you collect data from websites in a structured way. These tools support automation, parsing, and data extraction. Here are ten strong options.

  1. BeautifulSoup. Python library that parses HTML and XML. Simple and fast for small projects.
  2. Scrapy. Python framework built for large scraping workflows. Offers spiders, pipelines, and strong control.
  3. Selenium. Automates browsers. Helps scrape dynamic pages that use JavaScript.
  4. Playwright. Modern browser automation tool. Stable, fast, and supports multiple languages.
  5. Puppeteer. Headless Chrome automation for Node.js. Strong for dynamic site scraping.
  6. Requests. Python library for HTTP calls. Works well with BeautifulSoup for simple tasks.
  7. Apify. Cloud platform for scraping and automation. Offers ready templates and scalable runs.
  8. Octoparse. No code scraping tool. Good for beginners who want fast setup.
  9. ParseHub. Visual scraper for dynamic content. Exports data in clean formats.
  10. Diffbot. AI powered extraction API. Converts pages into structured data without manual selectors.

How to Pick the Right Tool

Pick a tool based on project size, site complexity, and coding comfort. Use browser automation for dynamic pages. Use lightweight parsers for static pages. Use cloud tools for large workloads.
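As a hedged sketch of the lightweight approach for a static page, with a placeholder URL, Requests and BeautifulSoup work together like this.

import requests
from bs4 import BeautifulSoup

url = "https://example.com"  # placeholder URL, replace with the target page
response = requests.get(url, timeout=10)

soup = BeautifulSoup(response.text, "html.parser")

# Collect every link address on the page
links = [a.get("href") for a in soup.find_all("a")]
print(links)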


Moroccan Darija Version

Top 10 Tools l Web Scraping

Web scraping kay3awnk tjma3 data men sites b tariqa mrtaba. Hna ashno l tools li kaynaw w kif y3awnouk.

  1. BeautifulSoup. Library dyal Python li kat parse HTML. Sahl w mzyan l projects sgharin.
  2. Scrapy. Framework dyal Python. Kay3awn f projects kbar w workflows.
  3. Selenium. Kayhark browser w kay scrape pages li fihom JavaScript.
  4. Playwright. Tool jdid w stable. Kaykhddm b Python, Node, w languages okhrin.
  5. Puppeteer. Automation dyal Chrome l Node.js. Kaykhddm mzyan m3a dynamic sites.
  6. Requests. Library dyal Python l HTTP calls. Kaytkamal m3a BeautifulSoup.
  7. Apify. Platform cloud bach tdir scraping w automation. Kayn fih templates w scalability.
  8. Octoparse. Tool bla code. Mzyan l beginners.
  9. ParseHub. Visual scraper li kaykhddm m3a content dynamic.
  10. Diffbot. API b AI. Kayhwel pages l data mrtaba bla selectors.

Kif tkhatar l tool ssi7

Khatar l tool 3la hsab size dyal project, complexity dyal site, w comfort dyal code. Ila site dynamic st3ml browser automation. Ila static st3ml tools khfafin. Ila project kbir st3ml cloud tools.

Difference Between Pretraining and Fine Tuning in LLMs

English Version

What Is Pretraining

Pretraining builds the core knowledge of a large language model. The model learns from large text datasets. It learns grammar, facts, reasoning patterns, and general language skills. It learns next word prediction during this phase. Pretraining shapes the base model behavior.

What Is Fine Tuning

Fine tuning adjusts the pretrained model for a specific task. It uses a smaller dataset. It shapes the model toward targeted outputs like support answers, coding help, or medical text analysis. Fine tuning aligns the model's behavior with the task objective.

Main Differences

  • Goal: Pretraining builds general language ability. Fine tuning adapts this ability to a task.
  • Data size: Pretraining uses huge datasets. Fine tuning uses focused datasets.
  • Cost: Pretraining is heavy and expensive. Fine tuning is lighter.
  • Output behavior: Pretrained models respond in a generic way. Fine tuned models behave in a controlled way.
  • Training signals: Pretraining uses next token prediction. Fine tuning uses labeled data or preference data.

Why This Matters

Understanding both steps helps you design better AI systems. You know when to use a base model. You know when to fine tune for accuracy, control, or reliability.


Moroccan Darija Version

Ash kayn f Pretraining

Pretraining kaybni l asasi dyal l model. L model kayt3llam men dbara dyal nass kbira. Kayt3llam l grammar, l info, w l patterns dyal reasoning. Kayt3llam kifash ykml l kalma jaya. Pretraining kaydir l base dyal l model.

Ash kayn f Fine Tuning

Fine tuning kaywajja l model l chi task mo3ayan. Kayst3ml data sghira w mkhasssa. Kaykhelli l model yjib jawabat mkhasssin b7al support, code, ola tahlil dyal nass tibi. Fine tuning kaybdl hadaf l model bach yntawej l task.

Farqu binathom

  • Goal: Pretraining kaydir l skills l 3amma. Fine tuning kaywajja had skills l task.
  • Data: Pretraining kayst3ml data kbira. Fine tuning kayst3ml data mkhasssa.
  • Cost: Pretraining ghali. Fine tuning ahwan.
  • Behavior: Model pretrained kayjib jawab 3amm. Model fine tuned kayjib jawab mkhassas.
  • Training: Pretraining kayst3ml next token prediction. Fine tuning kayst3ml data labeled ola preference.

3lach hadi mohemma

Ila fhamti pretraining w fine tuning ghat3rf kifash tsawb systems zwinin f AI. Ghatsst3ml base model f wa9t mo3ayan. W ghatt fine tune ila bghiti accuracy w control kbar.

Feature Engineering for AI Beginners

What Is Feature Engineering

Feature engineering is the process of creating inputs that help a model learn patterns. Raw data often needs transformation. Strong features improve accuracy and stability.

Why feature engineering is important

  • It improves model performance.
  • It reduces noise in the dataset.
  • It highlights useful patterns.
  • It prepares data for algorithms that expect structured inputs.

Main feature engineering actions

  • Create new columns from existing ones.
  • Encode text or categorical data.
  • Scale numerical values.
  • Normalize distributions.
  • Extract time features like hour or weekday.
  • Group rare categories.

Simple Python example

import pandas as pd

df = pd.read_csv("data.csv")

# Create a new feature
df["bmi"] = df["weight"] / (df["height"] ** 2)

# Encode category
df = pd.get_dummies(df, columns=["city"])

# Scale a value
df["age_scaled"] = (df["age"] - df["age"].mean()) / df["age"].std()

Tips for strong features

  • Use simple transformations.
  • Check correlation between new features and the target.
  • Remove features that bring no value.
  • Keep track of each transformation.
  • Test features with a baseline model.

Conclusion

Feature engineering creates clear inputs for machine learning tasks. It supports AI students and practitioners in reaching stable results. It forms a core skill in every project.


Feature Engineering b Darija

Feature engineering huwa lprocess li katsayeb inputs jdadin bach lmodel ytfham data mzyan. Data raw kats7taj tbdil. Features mqaddmin kayrfa3o performance dyal model.

Ash bhal faida dyalo

  • Kayhssen accuracy.
  • Kayn9i noise.
  • Kaybayyen patterns mhemmin.
  • Kayywajjid data l algorithms.

Steps m3roufin f feature engineering

  • Tsayeb columns jdadin.
  • Encode text w categories.
  • Scale values numeriques.
  • Normalize distributions.
  • Tsayeb time features bhal hour w weekday.
  • Tjam3 categories nqallin.

Exemple b Python

import pandas as pd

df = pd.read_csv("data.csv")

df["bmi"] = df["weight"] / (df["height"] ** 2)

df = pd.get_dummies(df, columns=["city"])

df["age_scaled"] = (df["age"] - df["age"].mean()) / df["age"].std()

Tips

  • Khdem b steps s7la.
  • Chouf correlation m3a target.
  • Hayed features li mafihom faida.
  • Sjjel kull bdl.
  • Testi features b model basique.

Khitam

Feature engineering kaywajjid inputs wadiin. Kay3awn talaba w practitioners f AI. Kayb9a skill mhemma f kol project.

Data Exploration and Data Cleaning

Data exploration and data cleaning form the base of every AI workflow. Clean data leads to stronger models. Exploration helps you understand structure, errors, and patterns.

What is data exploration

Data exploration checks shape, types, missing values, duplicates, and simple stats. It gives a first view of the dataset.

Key steps

  • Check dataset size.
  • Check column types.
  • Check missing values.
  • Check unique values for each column.
  • Check simple statistics like mean and median.
  • Look for outliers.

Basic Python example

import pandas as pd

df = pd.read_csv("data.csv")

print(df.head())
print(df.info())
print(df.describe())
print(df.isnull().sum())
print(df.duplicated().sum())

What is data cleaning

Data cleaning fixes errors and prepares the dataset. It includes removing duplicates, filling missing values, converting types, and handling outliers.

Common cleaning actions

  • Remove duplicates.
  • Drop irrelevant columns.
  • Fill missing values.
  • Convert string numbers to numeric types.
  • Standardize text values.
  • Handle outliers.

Cleaning example in Python

# Remove duplicates
df = df.drop_duplicates()

# Fill missing values
df["age"] = df["age"].fillna(df["age"].median())

# Convert to numeric
df["salary"] = pd.to_numeric(df["salary"], errors="coerce")

# Drop rows with invalid values
df = df.dropna()
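The action list also mentions standardizing text and handling outliers. A hedged sketch, assuming city and salary columns exist:

# Standardize text values
df["city"] = df["city"].str.strip().str.lower()

# Handle outliers by clipping salary to the 1st and 99th percentiles
low, high = df["salary"].quantile([0.01, 0.99])
df["salary"] = df["salary"].clip(low, high)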

Tips for strong preprocessing

  • Keep transformations simple.
  • Keep logs of steps.
  • Use clear column names.
  • Test the dataset after each change.
  • Store a clean version for modeling.

Conclusion

Data exploration shows structure. Data cleaning fixes it. These steps improve AI models and reduce errors. Every AI student and practitioner needs strong preprocessing skills.


Data Exploration w Data Cleaning b Darija

Data exploration w data cleaning houma l9a3da dyal kol workflow f AI. Data n9iya katsayeb models aqwa. Exploration kayb9a awwel fase bach tfham dataset.

Ash hiya data exploration

Katchouf shape, types, missing values, duplicates, w stats basita. Katt3tik nazra 3amma 3la dataset.

Steps mhemmin

  • Chouf size dyal dataset.
  • Chouf types dyal columns.
  • Chouf missing values.
  • Chouf unique values.
  • Chouf stats basita b7al mean w median.
  • Chouf outliers.

Exemple b Python

import pandas as pd

df = pd.read_csv("data.csv")

print(df.head())
print(df.info())
print(df.describe())
print(df.isnull().sum())
print(df.duplicated().sum())

Ash hiya data cleaning

Data cleaning katssayeb errors w katwajjid dataset. Katchmel delete duplicates, fill missing values, convert types, w handle outliers.

Steps dyal cleaning

  • Tri duplicates.
  • Hayed columns li mafihom faida.
  • 3mer missing values.
  • Convert types.
  • Standardize text.
  • Handle outliers.

Exemple cleaning b Python

df = df.drop_duplicates()

df["age"] = df["age"].fillna(df["age"].median())

df["salary"] = pd.to_numeric(df["salary"], errors="coerce")

df = df.dropna()

Tips

  • Khdem b steps s7lin.
  • Sjjel kull step.
  • Smi columns b klam wadi.
  • T9der ttesti dataset b3d kull taghyir.
  • Khlli version n9iya bach tbuildi models.

Khitam

Data exploration katfham dataset. Data cleaning katsla7 dataset. Had lfaslat kayrfa3o quality dyal models f AI. Talaba w practitioners kay7tajouhm f ay project.

Introduction to Matplotlib

Matplotlib is a Python library for plots. Developers use it to create charts for data analysis, machine learning, and scientific work. It gives full control over lines, labels, titles, and axes. It fits small and large projects.

Why use Matplotlib

  • It supports line charts, bar charts, scatter plots, histograms, and more.
  • It integrates with NumPy and pandas.
  • It works in Jupyter notebooks and Python scripts.
  • It offers stable output for research and industry tasks.

Basic installation

pip install matplotlib

Create your first plot

This example shows a simple line plot.

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [2, 4, 6, 8]

plt.plot(x, y)
plt.title("Simple Line Plot")
plt.xlabel("X Axis")
plt.ylabel("Y Axis")
plt.show()

This code creates a window with a chart. It links each x point with each y point. You can adjust labels and style as needed.

Common plot types

  • Line plots for trends.
  • Scatter plots for relations between two variables.
  • Bar charts for comparisons.
  • Histograms for distribution checks.
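Short sketches of these plot types, using small made-up lists:

import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]

plt.scatter(x, y)                            # relation between two variables
plt.show()

plt.bar(["A", "B", "C"], [10, 7, 12])        # comparison between groups
plt.show()

plt.hist([1, 2, 2, 3, 3, 3, 4, 5], bins=5)   # distribution check
plt.show()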

Tips for clean plots

  • Use short titles.
  • Label axes with clear words.
  • Keep grid lines simple.
  • Focus on readable font sizes.

Conclusion

Matplotlib helps build clear charts with few lines of code. It supports data analysis tasks for students and professionals. It works well in AI projects because it gives quick feedback on data patterns.


Introduction dialecte darija

Matplotlib hiya library f Python li katmken men rasm charts. Tkhdem mzyan f data analysis, machine learning, w t9dir tsayeb biha charts b kontrol kammel. Katshel f notebooks w scripts.

Ash nstafdo men Matplotlib

  • Katsupport line charts, bar charts, scatter plots, w histograms.
  • Katkhddem mzyan m3a NumPy w pandas.
  • Katrun b sor3a f Jupyter.
  • Katsayeb charts stables l research w l projects.

Tansib

pip install matplotlib

Awwel plot

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [2, 4, 6, 8]

plt.plot(x, y)
plt.title("Simple Line Plot")
plt.xlabel("X Axis")
plt.ylabel("Y Axis")
plt.show()

Had code kayrsam line plot basit. Katban window fiha chart. Kitrbat points dyal x m3a y. T9dar tbdel labels w style.

Ash men types

  • Line plot bach nshawfo trends.
  • Scatter plot bach nshawfo l relation bin variableat.
  • Bar chart bach nqarnu values.
  • Histogram bach nfhmou distribution.

Tips bach yban chart n9i

  • Khdem b title s7el.
  • Smi l axes b klam wadi.
  • Khelli grid s7el.
  • Khdem b font size maqbul.

Khitam

Matplotlib kay3awn bzaaf f rasm charts b sr3a. Kayfit l talaba, beginners, w professionals f AI w data. Kayb9a tool mohim f kol project kaytsana visualization.

Pandas Essentials for AI and Data Beginners

Pandas is the main library for data handling in Python. It gives you tools to load, clean, explore, and prepare datasets. AI and machine learning depend on clean data, and Pandas makes this process simple.

1. Importing Pandas

import pandas as pd

This is the standard import name. Always use pd.

2. Creating a DataFrame

data = {
    "name": ["Sara", "Ali", "Yassine"],
    "score": [85, 92, 78],
    "age": [21, 23, 22]
}

df = pd.DataFrame(data)
print(df)

A DataFrame is like an Excel sheet: rows and columns.

3. Loading Data From Files

  • CSV.
  • Excel.
  • JSON.

df = pd.read_csv("data.csv")
df = pd.read_excel("file.xlsx")
df = pd.read_json("info.json")

4. Inspecting Data

print(df.head())      
print(df.tail())      
print(df.info())      
print(df.describe())

  • head. first rows.
  • info. types and null values.
  • describe. stats for numeric columns.

5. Selecting Columns

print(df["name"])
print(df[["name", "score"]])

Use a list inside the brackets to select multiple columns.

6. Selecting Rows

Use iloc for index based selection. Use loc for label based selection.

print(df.iloc[0])        
print(df.iloc[0:2])      
print(df.loc[0])         

7. Filtering Rows With Conditions

high_scores = df[df["score"] > 80]
print(high_scores)

Filter with two conditions combined with &.

f = df[(df["score"] > 80) & (df["age"] < 23)]
print(f)

8. Adding and Updating Columns

df["passed"] = df["score"] >= 80
print(df)

You can also update values.

df["score"] = df["score"] + 5

9. Handling Missing Data

Check missing values.

print(df.isnull().sum())

Fill missing values.

df["age"] = df["age"].fillna(df["age"].mean())

Drop rows with missing values.

df = df.dropna()

10. Sorting Data

df_sorted = df.sort_values("score", ascending=False)
print(df_sorted)

11. Grouping Data

Grouping helps with summarization.

grouped = df.groupby("age")["score"].mean()
print(grouped)

You can apply several aggregation functions at once.

df.groupby("age").agg({"score": ["mean", "max"]})

12. Merging DataFrames

Pandas supports joins like SQL.

merged = pd.merge(df1, df2, on="id", how="inner")

  • inner.
  • left.
  • right.
  • outer.
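A self-contained sketch with two small made-up tables:

df1 = pd.DataFrame({"id": [1, 2, 3], "name": ["Sara", "Ali", "Yassine"]})
df2 = pd.DataFrame({"id": [1, 2, 4], "score": [85, 92, 70]})

merged = pd.merge(df1, df2, on="id", how="inner")
print(merged)   # keeps only ids 1 and 2, the ids present in both tables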

13. Removing Columns or Rows

df = df.drop("age", axis=1)  
df = df.drop(0)              

14. Converting Data Types

df["age"] = df["age"].astype(int)

Always check types before model training.

15. Exporting Data

df.to_csv("output.csv", index=False)
df.to_excel("output.xlsx", index=False)

16. Mini Projects With Pandas

Project 1. Sales Analysis

  • Load sales CSV.
  • Group by product.
  • Compute revenue.
  • Sort by top sellers.

Project 2. Student Grades Report

  • Load student data.
  • Fill missing grades.
  • Create pass or fail column.
  • Export results.

Project 3. Data Cleaning Script

  • Load messy dataset.
  • Drop duplicates.
  • Fix types.
  • Handle missing values.

Pandas in Moroccan Darija

Pandas kay3awnk t3alj data b tariqa sahl. Kat load data. Katcleani. Katsort. Katgroupi. W kat7ddarha l machine learning.

  • df.head() bach tchouf data.
  • df["col"] bach tjib column.
  • Filtering bach tselecti rows.
  • groupby bach tdir stats.
  • merge bach tjma3 datasets.

Ila t9der tkhddem b Pandas mzyan, t9der tbni projects dial AI bla t3qid.

Conclusion

Pandas offers strong tools for data loading, cleaning, filtering, and grouping. These essentials prepare your datasets for machine learning and deep learning. Learn them well to build strong AI workflows.

NumPy Essentials Tips With Python

NumPy is the main library for numeric work in Python. AI, data science, and machine learning depend on it. This guide shows essential tips to help you use NumPy with a clean and fast workflow.

1. Creating Arrays

Use np.array() to create arrays.

import numpy as np

a = np.array([1, 2, 3])
b = np.array([[1, 2], [3, 4]])

  • Use lists for simple arrays.
  • Use nested lists for matrices.

2. Check Shape and Size

Shape tells the structure. Size tells the number of elements.

print(b.shape)   # (2, 2)
print(b.size)    # 4

Always check shape before model training or matrix operations.

3. Useful Array Creation Functions

  • np.zeros(). array of zeros
  • np.ones(). array of ones
  • np.arange(). range of numbers
  • np.linspace(). evenly spaced numbers
  • np.eye(). identity matrix

z = np.zeros((3, 3))
r = np.arange(0, 10, 2)

4. Indexing and Slicing

Indexing helps you access elements. Slicing helps you extract parts of arrays.

x = np.array([10, 20, 30, 40, 50])

print(x[0])       
print(x[1:4])     
print(x[:3])      
print(x[::2])     

Indexing in 2D

m = np.array([[5, 6], [7, 8], [9, 10]])

print(m[0, 1])    
print(m[:, 0])    
print(m[1:, :])   

5. Vectorized Operations

NumPy operations apply to all elements. No need for manual loops.

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(a + b)
print(a * b)
print(a ** 2)

Vectorization improves speed and simplifies code.

6. Matrix Operations

Matrix work is common in AI.

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A.dot(B))
print(np.matmul(A, B))

  • dot and matmul perform matrix multiplication.

7. Broadcasting

Broadcasting lets NumPy combine arrays with different shapes when possible.

a = np.array([1, 2, 3])
b = 2

print(a * b)      

This reduces code and increases speed.

8. Boolean Masking

Masking filters arrays using conditions.

x = np.array([3, 8, 1, 9, 4])

mask = x > 5
print(x[mask])    

Useful for cleaning data or selecting features.

9. Aggregation Functions

NumPy offers fast summary tools.

arr = np.array([2, 4, 6, 8])

print(arr.sum())
print(arr.mean())
print(arr.min())
print(arr.max())
print(arr.std())

10. Reshape Arrays

Reshape changes array form without changing values.

v = np.array([1, 2, 3, 4, 5, 6])
m = v.reshape(2, 3)

Reshape is important for neural networks, CNNs, and model inputs.

11. Stacking and Concatenation

Stack arrays vertically or horizontally.

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.vstack([a, b]))
print(np.hstack([a, b]))

12. Random Module

NumPy includes tools for random generation.

np.random.seed(0)
print(np.random.rand(3))
print(np.random.randint(0, 10, size=5))

Useful for model testing and reproducibility.

13. Performance Tips

  • Replace loops with vectorized operations.
  • Use astype() to control data types.
  • Use copy() when needed to avoid shared memory issues.
  • Always confirm shapes before matrix ops.

14. Mini Projects With NumPy

Project 1. Normalize a Dataset

  • Load numeric array.
  • Subtract mean.
  • Divide by standard deviation.

Project 2. Implement Simple Distance Function

  • Create two arrays.
  • Compute Euclidean distance using vector operations.

Project 3. Build a Small Matrix Calculator

  • Take two arrays.
  • Perform multiply, add, subtract, and transpose.
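Hedged sketches for the first two projects, using made-up arrays:

import numpy as np

# Project 1: normalize a dataset with a z score
data = np.array([2.0, 4.0, 6.0, 8.0])
normalized = (data - data.mean()) / data.std()

# Project 2: Euclidean distance between two vectors
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])
distance = np.sqrt(np.sum((a - b) ** 2))

print(normalized, distance)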

NumPy Essentials in Moroccan Darija

NumPy kay3awnk tkhddem b arrays b speed kbir. F AI daba, NumPy howa base. Had tips kay3awnouk tebni code n9i w rapide.

  • np.array() bach tsawb arrays.
  • .shape bach tchouf structure.
  • Indexing bach taccessi values.
  • Vectorization bach tkhadam bla loops.
  • Matrix ops bach tdir multiplication.
  • Masking bach tfiltri data.
  • Reshape bach tbadal form.

Ila fhamti had essentials, t9der tebni models w data pipelines bla t3qid.

Conclusion

NumPy offers fast numeric work for AI and data tasks. Learn arrays, shapes, indexing, vectorization, and matrix operations. These tips build strong skills for machine learning, neural networks, and data science projects.

Tutorial in Python Basics

Python Basics for AI Beginners

Python is the main language for AI and data. If you want to learn machine learning or deep learning, you need strong Python basics. This guide stays simple and clear. You can follow it even if you start from zero.

1. What Is Python and Why Do We Use It in AI

  • Python has clean and readable syntax.
  • Big AI ecosystem. NumPy, Pandas, Matplotlib, PyTorch, TensorFlow.
  • Large community and many examples.
  • Works on Windows, Linux, and macOS.

Python lets you focus on ideas instead of complex syntax. That helps a lot when you start with AI concepts.

2. Installing Python

Steps

  • Go to the official Python website.
  • Download the latest stable version of Python 3.
  • Install and check the option to add Python to PATH.
  • Open a terminal or command prompt.
  • Run this command.

python --version

If you see a version number, Python is ready.

3. Your First Python Program

Open a file called hello.py and write.

print("Hello AI world")

Run it.

python hello.py

You see the message in your terminal. This is your first step toward AI coding.

4. Python Syntax Basics

4.1 Indentation

Python uses spaces to mark blocks. No braces. Indentation is important.

if 5 > 3:
    print("True")

Use four spaces per level. Keep the style consistent.

4.2 Comments

Use comments to explain code.

# This is a comment
print("Hello")  # Inline comment

5. Variables and Data Types

5.1 Creating Variables

x = 10
name = "Sara"
pi = 3.14
is_ai_fun = True

Python detects the type from the value.

5.2 Main Data Types

  • int. whole numbers.
  • float. decimal numbers.
  • str. text strings.
  • bool. True or False.
  • list. ordered collection.
  • tuple. fixed ordered collection.
  • dict. key value pairs.

5.3 Lists

numbers = [1, 2, 3, 4]
print(numbers[0])      # First element
numbers.append(5)      # Add element

5.4 Dictionaries

student = {
    "name": "Sara",
    "age": 21,
    "field": "AI"
}

print(student["name"])

6. Operators

6.1 Arithmetic Operators

a = 10
b = 3

print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a // b)   # Integer division
print(a % b)    # Remainder
print(a ** b)   # Power

6.2 Comparison Operators

x = 5
print(x == 5)
print(x != 5)
print(x > 3)
print(x < 10)

6.3 Logical Operators

age = 20
is_student = True

print(age > 18 and is_student)
print(age < 18 or is_student)
print(not is_student)

7. Control Flow

7.1 if, elif, else

score = 82

if score >= 90:
    print("Grade A")
elif score >= 80:
    print("Grade B")
else:
    print("Grade C or lower")

7.2 for Loop

for i in range(5):
    print(i)

Loop through a list.

names = ["Ali", "Sara", "Amine"]

for n in names:
    print(n)

7.3 while Loop

count = 0

while count < 3:
    print("Count:", count)
    count += 1

8. Functions

8.1 Defining Functions

def greet(name):
    print("Hello", name)

greet("Sara")

8.2 Functions With Return

def add(a, b):
    return a + b

result = add(3, 4)
print(result)

Functions help you reuse code and keep projects clean.

9. Modules and Packages

9.1 Importing Modules

import math

print(math.sqrt(16))

9.2 From Import

from math import sqrt

print(sqrt(25))

Modules hold functions and classes. You import them instead of writing everything in one file.

10. Working With Files

# Writing
with open("data.txt", "w") as f:
    f.write("AI is fun")

# Reading
with open("data.txt", "r") as f:
    content = f.read()
    print(content)

File handling is important for data projects. You load datasets and store results.

11. Virtual Environments

Virtual environments keep project packages isolated.

Steps

python -m venv venv

Activate.

  • Windows: venv\Scripts\activate
  • Linux or macOS: source venv/bin/activate

After that install packages.

pip install numpy pandas matplotlib

12. First AI Oriented Libraries

12.1 NumPy

NumPy handles arrays and numeric operations.

import numpy as np

arr = np.array([1, 2, 3])
print(arr * 2)

12.2 Pandas

Pandas handles tables and structured data.

import pandas as pd

data = {
    "name": ["Sara", "Ali"],
    "score": [90, 85]
}

df = pd.DataFrame(data)
print(df)

12.3 Matplotlib

Matplotlib draws charts.

import matplotlib.pyplot as plt

x = [1, 2, 3]
y = [2, 4, 6]

plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Simple line")
plt.show()

These three libraries form a strong base for AI and data science.

13. Mini Projects for Python Basics

Project 1. Simple Calculator

  • Ask the user for two numbers.
  • Ask for an operation. plus, minus, multiply, divide.
  • Use if statements.
  • Print the result.

Project 2. Student Grades Report

  • Store students in a list of dictionaries.
  • Each dictionary holds name and grade.
  • Loop over the list and print a report.
  • Find the highest grade.

Project 3. Simple Data Summary With Pandas

  • Create a CSV file with product, price, and quantity.
  • Load it with Pandas.
  • Compute total revenue.
  • Show basic statistics.

These projects connect Python basics with small data tasks. That prepares you for machine learning projects.

14. Typical Errors and How to Avoid Them

  • IndentationError. fix spaces and keep alignment.
  • NameError. check spelling of variable names.
  • TypeError. check types before operations.
  • ModuleNotFoundError. install packages with pip and check environment.

Read error messages. They show the line and the type of error. This skill is important for AI work.

15. Next Steps After Python Basics

  • Learn more about NumPy and Pandas.
  • Study data visualization.
  • Start with machine learning using scikit-learn.
  • Move to deep learning. PyTorch or TensorFlow.

Python basics open the door. AI libraries build on these concepts.


Python Basics in Moroccan Darija

Db l AI kaybda m3a Python. Ila fhamti Python basics, t9der tdkhl l machine learning w deep learning bla t3qid.

1. 3lach Python

  • Syntax sahl.
  • Libraries kbar. NumPy, Pandas, Matplotlib, PyTorch, TensorFlow.
  • Community kbira w resources ktr.

2. Awel Code

print("Salam mn Python")

Run code w shof output f terminal. Hadi bidaya dyalek.

3. Variables

age = 22
name = "Yassine"
is_student = True

Python kayfham type mn value.

4. Lists w Dicts

numbers = [1, 2, 3]
student = {"name": "Aya", "score": 95}

Lists bach t7t data mratba. Dicts bach t7t key value.

5. Loops w Conditions

if age >= 18:
    print("Baligh")
else:
    print("Sghir")

for n in numbers:
    print(n)

6. Functions

def greet(name):
    print("Salam", name)

greet("Aya")

Functions kat7afdk code w kat3awnk f projects kbira.

7. Libraries dial Data

import numpy as np
import pandas as pd

NumPy l arrays. Pandas l tables. Hadchi howa base dial AI.


Conclusion

Python basics form the first step toward AI. Learn syntax, variables, data types, loops, and functions. Use NumPy, Pandas, and Matplotlib for data. Build small projects. With these tools, you can start machine learning and deep learning with confidence.

NLP Tasks in Artificial Intelligence

NLP Tasks

NLP tasks focus on understanding, processing, and generating human language. These tasks help machines read, classify, translate, and reason with text.

1. Text Classification

Assigns a label to text.

  • Spam detection
  • Sentiment analysis
  • Topic classification

2. Named Entity Recognition

Finds entities inside text. Entities include names, places, dates, and organizations.

3. Part of Speech Tagging

Labels each word with its grammatical role. Example. noun, verb, adjective.

4. Text Generation

Creates new text from input.

  • Chatbots
  • Story generation
  • Email drafting

5. Machine Translation

Converts text from one language to another.

  • English to French
  • Arabic to English
  • Spanish to German

6. Question Answering

Answers questions using context or knowledge.

  • Reading comprehension
  • Search engines
  • Chat assistants

7. Summarization

Produces a short version of text.

  • Extractive summarization
  • Abstractive summarization

8. Text Similarity

Measures how close two texts are in meaning.

  • Duplicate detection
  • Paraphrase recognition
  • Recommendation

9. Speech to Text

Converts audio speech into text.

10. Text to Speech

Converts text into natural speech.

11. Coreference Resolution

Finds which words refer to the same entity. Example. “Sara said she will come”. She refers to Sara.

12. Relation Extraction

Finds relationships between entities in text.

13. Dialogue Systems

Handles conversation with users. Task includes intent detection and response generation.

14. Information Extraction

Pulls structured information from unstructured text.

15. Sentiment and Emotion Analysis

Detects feelings expressed in text.

NLP Tasks in Moroccan Darija

NLP tasks hiyya l mohimat li kat3awn machine tfham l bnat mssaj bash t3alj wahed l text ola tjiwbo.

Examples

  • Classification. sentiment, spam.
  • NER. t3raf names w places.
  • Translation. tarjama men language l language.
  • Summarization. tlakhis.
  • QA. jawb 3la soual.
  • Speech to text.
  • Text generation.

Conclusion

NLP tasks cover classification, extraction, generation, and understanding. These tasks support many AI systems such as chatbots, search engines, and translation tools.

Word Embedding and Word Vectors in AI

Word Embedding and Word Vectors

Word embeddings or word vectors are numeric representations of words. They turn text into numbers that models can understand. Words with similar meaning get vectors that are close in space.

Why Word Embeddings Matter

  • Convert text into numeric form
  • Capture meaning and relationships
  • Improve NLP model performance
  • Reduce dimensionality compared to one hot encoding

How Word Embeddings Work

  • Each word becomes a vector of continuous values.
  • Values carry semantic information.
  • Vectors place related words close together.

Popular Embedding Methods

1. Word2Vec

Uses CBOW or Skip Gram to learn embeddings from context.

2. GloVe

Learns vectors from global statistics of word co-occurrence.

3. FastText

Uses subword information. Helps with rare and misspelled words.

4. Contextual Embeddings

Generated by models like BERT. Same word can have different vectors depending on context.

Types of Word Embeddings

Static Embeddings

Each word has one fixed vector. Example. Word2Vec, GloVe.

Contextual Embeddings

Word meaning changes based on sentence. Example. BERT, GPT.

Example of Meaning in Vector Space

In a good embedding space:

  • king minus man plus woman gives queen
  • walk and walking stay close
  • happy and joyful stay close
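A small sketch of closeness in vector space, using made-up three dimensional vectors and cosine similarity. Real embeddings have hundreds of dimensions; these values are only for illustration.

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

happy  = np.array([0.9, 0.1, 0.3])    # made-up toy vectors, not trained embeddings
joyful = np.array([0.8, 0.2, 0.35])
table  = np.array([0.1, 0.9, 0.7])

print(cosine(happy, joyful))  # close to 1, similar meaning
print(cosine(happy, table))   # lower, unrelated words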

Benefits of Word Embeddings

  • Compact representation
  • Semantic meaning captured
  • Better model accuracy

Limitations

  • Static embeddings ignore context
  • May capture dataset bias

Word Embeddings in Moroccan Darija

Word embeddings hiyya tariqa bach n7awlo words l vectors. Had vectors kay7mlo meaning. Words li kayn f same context kayjiw qrabin f vector space.

Examples

  • Word2Vec f context learning.
  • GloVe f global statistics.
  • FastText f subwords.
  • BERT f contextual meaning.

Nqat Sahl

  • Text kaywlli numbers.
  • Meaning kayban f vectors.
  • Models kayfhamo text b7al data numeric.

Conclusion

Word embeddings turn text into meaningful vectors. They support most NLP systems. They help models understand relationships between words with strong accuracy.

Loss Functions, Training Loss, Validation Loss

Loss functions measure how far model predictions are from the correct outputs. They guide the training process. Lower loss means better performance.

What Is a Loss Function

A loss function gives the model a numeric score. This score shows how wrong the prediction is. Training tries to reduce this score.

Common Loss Functions

1. Mean Squared Error

Used in regression. Measures squared distance between predicted and true values.

2. Cross Entropy Loss

Used in classification. Measures how well predicted probabilities match true labels.

3. Hinge Loss

Used in SVM tasks. Encourages the correct class score to exceed the others by a margin.

4. MAE

Mean Absolute Error. Measures absolute difference between prediction and truth.
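Small NumPy sketches of these losses on made-up values:

import numpy as np

y_true = np.array([3.0, 5.0, 2.0])   # made-up targets
y_pred = np.array([2.5, 5.5, 2.0])   # made-up predictions

mse = np.mean((y_true - y_pred) ** 2)       # mean squared error
mae = np.mean(np.abs(y_true - y_pred))      # mean absolute error

# Binary cross entropy on predicted probabilities
p_true = np.array([1.0, 0.0, 1.0])
p_pred = np.array([0.9, 0.2, 0.7])
bce = -np.mean(p_true * np.log(p_pred) + (1 - p_true) * np.log(1 - p_pred))

print(mse, mae, bce)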

What Is Training Loss

Training loss is the loss measured on the training dataset. It shows how well the model learns from the data it sees during training.

What Is Validation Loss

Validation loss is the loss measured on unseen data. This data is not used for training. It checks if the model generalizes well.

Why Training and Validation Loss Matter

  • Training loss shows learning progress.
  • Validation loss shows generalization.
  • Low training loss with high validation loss means overfitting.
  • High training and validation loss means underfitting.

How To Use Loss Curves

  • Track training loss each epoch.
  • Track validation loss each epoch.
  • Stop training when validation loss stops improving.
  • Adjust model size or regularization based on trends.

Loss Functions in Moroccan Darija

Loss function hiyya score kat9is sh7al prediction b3id 3la truth. Training y7awel y9allel had score.

Training Loss

Loss f data li model kayt3llam biha.

Validation Loss

Loss f data li model ma chafhach. Katban wach model kay3mmem mzyan.

Nqat Sahl

  • Ila training loss hbat w validation loss tla, 3ndna overfitting.
  • Ila juj high, 3ndna underfitting.

Conclusion

Loss functions measure model errors. Training loss tracks learning. Validation loss tracks generalization. Strong models keep both losses low and stable.

Fine Tuning vs Parameter Efficient Fine Tuning

Fine tuning and Parameter Efficient Fine Tuning are two methods used to adapt large models to new tasks. Both change model behavior, but they use different training strategies.

What Is Fine Tuning

Fine tuning updates all model weights. You start from a pretrained model and train it on a new dataset. This creates a strong task specific model.

How It Works

  • Load a pretrained model.
  • Train on task data.
  • Update all parameters.
  • Save the full model.

Strengths

  • High performance
  • Full control over the model
  • Strong adaptation to the new domain

Limitations

  • High compute cost
  • Large memory needs
  • Slow training

What Is Parameter Efficient Fine Tuning

Parameter Efficient Fine Tuning updates only a small part of the model. The rest stays frozen. This reduces training cost and memory needs.

Common PEFT Methods

  • LoRA
  • Adapters
  • Prefix tuning
  • Prompt tuning

How PEFT Works

  • Freeze the main model.
  • Add small trainable modules.
  • Train only these small modules.
  • Keep the core weights untouched.
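A hedged sketch of the LoRA idea, assuming PyTorch is available. The frozen weight stays fixed, and only the small matrices A and B receive gradients. This illustrates the principle, not the full method.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim, out_dim, rank=4):
        super().__init__()
        # Frozen weight, standing in for a layer of the pretrained model
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim), requires_grad=False)
        # Small trainable low rank modules
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))

    def forward(self, x):
        return x @ (self.weight + self.B @ self.A).T

layer = LoRALinear(16, 8)
out = layer(torch.randn(4, 16))   # only A and B train, the core weight is untouched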

Strengths

  • Low compute
  • Fast training
  • Small storage size

Limitations

  • Lower flexibility than full fine tuning
  • Task performance can depend on tuning type

Main Differences

  • Updated parameters: Fine tuning updates all weights. PEFT trains small added modules.
  • Compute cost: Fine tuning is high. PEFT is low.
  • Memory need: Fine tuning is large. PEFT is small.
  • Flexibility: Fine tuning is high. PEFT is medium.
  • Use case: Fine tuning suits large training budgets. PEFT suits lightweight adaptation.

When To Use Each Method

Use Fine Tuning When

  • You have strong compute resources
  • You need full control
  • You target maximum accuracy

Use PEFT When

  • You have limited compute
  • You want fast experimentation
  • You need small and portable models

Fine Tuning vs PEFT in Moroccan Darija

Fine tuning kayupdate l model kaml. PEFT kayupdate ghir parte sghira men l model. L hadaf howa nkhdmo b cost w memory sghirin.

Fine Tuning

  • Kayupdate kolchi.
  • Performance qaouiya.
  • Cost kbir.

PEFT

  • Kayfreeze l model.
  • Kayzid modules sgharin.
  • Training sahl w rapide.

Conclusion

Fine tuning changes all weights for maximum control. Parameter Efficient Fine Tuning updates small modules to save compute. Both methods help you adapt models to new tasks with clear benefits.

Activation Functions in Deep Learning

Activation functions control how neurons respond inside a neural network. They add non linear behavior. This helps the model learn complex patterns.

Why Activation Functions Are Important

  • Help networks learn non linear relations
  • Guide gradient flow
  • Control output ranges
  • Improve training stability

Common Activation Functions

1. Sigmoid

Output stays between zero and one. Useful for binary classification.

Pros

  • Clear output range
  • Good for probability outputs

Cons

  • Slow gradients
  • Vanishing gradients in deep networks

2. Tanh

Output stays between minus one and one. Stronger signal range than sigmoid.

Pros

  • Zero centered output
  • Better gradients than sigmoid

Cons

  • Still suffers from vanishing gradients

3. ReLU

Returns zero for negative inputs and the input value for positive ones. ReLU is widely used in deep networks.

Pros

  • Fast computation
  • Strong gradient flow
  • Supports deep architectures

Cons

  • Dead neurons if weights push many values below zero

4. Leaky ReLU

Solves ReLU dead neuron issue by giving a small slope for negative inputs.

Pros

  • Less dead neurons
  • Stable gradients

Cons

  • Extra hyperparameter for slope

5. Softmax

Turns outputs into probabilities that sum to one. Used in multi class classification.

Pros

  • Clear probability distribution
  • Strong for classification

Cons

  • Sensitive to large values

6. GELU

A smooth activation used in transformer models. Offers stable and flexible behavior.

Pros

  • Better performance in modern networks
  • Smoother than ReLU

Cons

  • More compute than ReLU
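Small NumPy sketches of the functions described above:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numeric stability
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))   # probabilities that sum to one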

How To Choose an Activation Function

  • Use ReLU or GELU for most deep models
  • Use sigmoid for binary output
  • Use softmax for multi class output
  • Use tanh in networks that need centered values

Activation Functions in Moroccan Darija

Activation functions kaykhdmo bach yzido non linearity f neural network. Hadi katkhlli model y9der y3rf patterns m3a9din.

Sigmoid

Output bin zero w one. Mzyan f binary classification.

Tanh

Output bin minus one w one. Zero centered.

ReLU

Zero ila input negative. Input ila positive. Sahl w rapide.

Leaky ReLU

Kayssir 3la problem dial dead neurons b slope sghir f negative.

Softmax

Kaydiro probabilities f classification multi class.

Conclusion

Activation functions guide how networks learn. They shape gradients, outputs, and training stability. Choosing the right one strengthens model performance.

Xavier Initialization in Neural Networks

Xavier Initialization

Xavier initialization is a method for setting initial weights in neural networks. It helps keep activations stable across layers. This improves training speed and reduces exploding or vanishing values.

Why Weight Initialization Matters

  • Controls activation scale
  • Prevents exploding values
  • Prevents vanishing values
  • Improves convergence

Core Idea of Xavier Initialization

The method sets weights so that the variance of activations stays balanced between layers. This balance supports smooth forward and backward passes.

How Xavier Initialization Works

Xavier uses the number of input and output units to scale weights. It targets equal variance at each layer.

Formula for Xavier Uniform

Weights are sampled from a uniform range between -limit and +limit, where limit = sqrt(6 / (fan_in + fan_out)).

Formula for Xavier Normal

Weights follow a normal distribution with mean 0 and variance 2 / (fan_in + fan_out).
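A small NumPy sketch of both variants, where fan_in and fan_out are the layer's input and output sizes:

import numpy as np

fan_in, fan_out = 256, 128

# Xavier uniform
limit = np.sqrt(6 / (fan_in + fan_out))
w_uniform = np.random.uniform(-limit, limit, size=(fan_out, fan_in))

# Xavier normal
std = np.sqrt(2 / (fan_in + fan_out))
w_normal = np.random.normal(0.0, std, size=(fan_out, fan_in))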

When To Use Xavier Initialization

  • Networks with sigmoid activation
  • Networks with tanh activation
  • Shallow and deep feedforward networks

Benefits

  • Stable gradients
  • Smoother optimization
  • Faster training

Limitations

  • Not ideal for ReLU based networks
  • Better options exist for ReLU such as He initialization

Xavier Initialization in Moroccan Darija

Xavier initialization hiyya tariqa bach nbadlou weights f neural network b tariqa mizan. L hadaf howa n7afdo 3la variance mzwna bin layers.

Kif Kaykhddam

  • Kays7ab fan in w fan out.
  • Kays9es weights b uniform ola normal distribution.
  • Kayssa3ed bach gradients yb9aw mizan.

Mfad

  • Training swel.
  • No vanishing.
  • No exploding.

Conclusion

Xavier initialization sets balanced weights at the start of training. It stabilizes activations and gradients. It supports strong performance in networks that use sigmoid or tanh activations.

Learning Paradigms in Data and Machine Learning

Learning Paradigms in Data

Learning paradigms describe how models learn from data. Each paradigm uses a different setup. The goal is to choose the right method based on the problem and the type of data you have.

1. Supervised Learning

Supervised learning uses labeled data. Each input has an output. The model learns the link between them.

Examples

  • Spam detection
  • Image classification
  • Price prediction

2. Unsupervised Learning

Unsupervised learning uses data without labels. The model finds patterns or structure.

Examples

  • Clustering
  • Dimensionality reduction
  • Anomaly detection

3. Semi Supervised Learning

Semi supervised learning uses a mix of labeled and unlabeled data. The model uses labeled data to guide its learning.

Examples

  • Text classification with small labeled sets
  • Image labeling with limited annotation

4. Self Supervised Learning

Self supervised learning builds labels from the data itself. The model learns from internal signals inside the dataset.

Examples

  • Masked word prediction
  • Image patch prediction

5. Reinforcement Learning

Reinforcement learning uses interaction. An agent takes actions, gets rewards, and improves its policy.

Examples

  • Robotics
  • Game playing
  • Navigation systems

6. Transfer Learning

Transfer learning takes a model trained on one task and adapts it to another task.

Examples

  • Using a pretrained CNN for new image datasets
  • Using a pretrained transformer for text tasks

7. Online Learning

Online learning updates the model step by step as new data flows in. It handles streaming data.

Examples

  • Real time recommendations
  • Live anomaly detection

Learning Paradigms in Moroccan Darija

Learning paradigms hiyya tariq dial t3llam models mn data. Kul paradigm kayst3mel style mokhtalef.

Supervised

Data m3a labels. Model kayt3llam link.

Unsupervised

Data bla labels. Model kayjber patterns.

Semi Supervised

Mxoj dial labeled w unlabeled.

Self Supervised

Model kayt3llam mn data b labels mkhlouqin mn data.

Reinforcement

Agent kaydir actions w kayakhod reward.

Transfer

Training m task o usage f task okhra.

Online

Model kayupdate m3a data jdid.

Conclusion

Learning paradigms offer different ways to train models. Each paradigm serves a specific type of data and task. Understanding them helps you choose the right method for your project.

Regularization in Machine Learning

Regularization

Regularization is a method that reduces overfitting. It keeps the model simple and stable. It controls how large the model weights grow during training.

Why Regularization Is Important

  • Stops overfitting
  • Improves generalization
  • Prevents models from memorizing noise

How Regularization Works

Regularization adds a penalty term to the loss function. This penalty pushes the weights to stay small. Small weights create smoother decision boundaries.

Main Types of Regularization

1. L1 Regularization

L1 adds the sum of absolute weights to the loss. It pushes some weights to zero. This creates sparse models.

2. L2 Regularization

L2 adds the sum of squared weights to the loss. It keeps weights small and stable. It is used in most ML and DL tasks.

3. Dropout

Dropout turns off random neurons during training. This forces the network to learn stronger patterns.

4. Early Stopping

Training stops when validation error stops improving. This avoids learning noise.
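A small sketch of the penalty idea with NumPy and made-up weights:

import numpy as np

weights = np.array([0.5, -1.2, 3.0])   # made-up model weights
data_loss = 0.8                        # made-up base loss value
lam = 0.01                             # regularization strength

l1_penalty = lam * np.sum(np.abs(weights))   # L1: sum of absolute weights
l2_penalty = lam * np.sum(weights ** 2)      # L2: sum of squared weights

total_loss = data_loss + l2_penalty
print(total_loss)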

When To Use Regularization

  • When the model overfits
  • When training data is small
  • When the model is too complex

Regularization in Deep Learning

  • Dropout layers
  • L2 weight decay
  • Batch normalization

Regularization in Moroccan Darija

Regularization hiya tariqa katsayad model bach ma yoverfitich. Kat7dd men l weights w katkhlli model y3mmem mzyan.

Types

  • L1. Kay7tt absolute weights f loss.
  • L2. Kay7tt squared weights f loss.
  • Dropout. Kaytfi neurons f training.
  • Early stopping. Kaywaqf training mlli validation t9ef.

Conclusion

Regularization reduces overfitting. It keeps models clean, stable, and reliable. It forms a core part of modern machine learning.

Data Imbalance in Machine Learning

Data Imbalance

Data imbalance happens when one class in a dataset has far more samples than another. The model learns more from the dominant class. This creates weak predictions for the minority class.

Why Data Imbalance Is a Problem

  • The model focuses on the majority class.
  • The model ignores rare cases.
  • Accuracy becomes misleading.
  • Predictions lose fairness.

Common Examples

  • Fraud detection. Fraud cases are few.
  • Medical diagnosis. Rare diseases appear with low frequency.
  • Spam detection. Spam or ham counts differ.

Effects of Data Imbalance

  • High accuracy with poor real performance
  • Biased model outputs
  • Weak recall on minority class

Ways to Handle Data Imbalance

1. Undersampling

Reduce samples in the majority class.

2. Oversampling

Increase samples in the minority class by duplication.

3. SMOTE

Create synthetic samples for the minority class.

4. Class Weighting

Give higher weight to minority samples during training.
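A hedged scikit-learn sketch, using a made-up imbalanced dataset:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Made-up imbalanced dataset: about 90 percent class 0, 10 percent class 1
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# class_weight="balanced" raises the weight of the minority class
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X, y)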

Evaluation Tips

  • Use precision and recall.
  • Use F1 score.
  • Use confusion matrix.

Data Imbalance in Moroccan Darija

Data imbalance kaykoun mlli class wahed kayn b quantidade kbira w class okhor kayn b quantidade sghira. Model kayt3llam aktar men class l kbir w kaytghafel class sghir.

L Moshkil

  • Model kayfocus 3la majority.
  • Minority kaywalou weak.
  • Accuracy katban mzyana bsah reality la.

L Hal

  • Undersampling.
  • Oversampling.
  • SMOTE.
  • Class weighting.

Conclusion

Data imbalance creates biased models. Fixing it improves fairness and prediction quality.

Data Sampling in Machine Learning

Data Sampling

Data sampling is the process of selecting a smaller part of a dataset. The goal is to analyze or train models without using the full data. The sample must represent the main dataset.

Why Data Sampling Is Important

  • Reduces compute time
  • Speeds up testing and experiments
  • Handles large datasets
  • Improves workflow when data is hard to process

Types of Data Sampling

1. Random Sampling

Select items at random. Each item has an equal chance of being chosen.

2. Stratified Sampling

Split data into groups called strata. Take samples from each group. This keeps proportions stable.

3. Systematic Sampling

Select every k-th item from a list.

4. Cluster Sampling

Split data into clusters. Pick some clusters and analyze all items in them.
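A small sketch of random and stratified sampling, assuming a made-up table with a label column:

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"value": range(100), "label": [0] * 80 + [1] * 20})

# Random sampling: 20 percent of rows, each row has an equal chance
random_sample = df.sample(frac=0.2, random_state=0)

# Stratified sampling: keeps the 80 / 20 label proportions in the sample
sample, rest = train_test_split(df, train_size=0.2, stratify=df["label"], random_state=0)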

Sampling in Machine Learning

  • Used to balance datasets
  • Used to handle imbalanced classes
  • Used to reduce dataset size
  • Used to speed training

Balancing Methods

Undersampling

Remove samples from the majority class.

Oversampling

Add or duplicate samples from the minority class.

SMOTE

Create synthetic samples for the minority class.

Challenges

  • Bad samples cause bias
  • Small samples reduce accuracy
  • Stratification may be required for fairness

Data Sampling in Moroccan Darija

Data sampling howa ikhraj chi parte sghira men dataset kbir. Kankhdmo biha bach ntestiw models w nser3o l process.

Types

  • Random. Ikhtiyar random.
  • Stratified. Kankhsmo data l groups w kandiro sample men kol group.
  • Systematic. Kandiro selection kola k step.
  • Cluster. Kandiro clusters w kankhtaro chi clusters kamlin.

F ML

  • Balancing.
  • Reduction.
  • Speed training.

Conclusion

Data sampling helps you work with large datasets. It reduces cost, speeds testing, and supports balanced machine learning tasks.

Agent in AI

An agent in AI is a system that observes its environment, makes decisions, and takes actions. It follows a cycle that connects perception with action. Its goal is to achieve a target defined by its designer.

Core Idea

The agent senses the environment, processes the information, and selects the best action. It repeats this loop to improve behavior.

Main Components

1. Environment

The world where the agent operates. It provides inputs and reacts to actions.

2. Sensors

Tools that let the agent read the environment. For software agents, sensors can be text, data, or system states.

3. Actuators

Tools that allow the agent to take actions. For software agents, actuators can be outputs, messages, or API calls.

4. Policy

A rule that guides the agent. It determines actions for each state.

How an AI Agent Works

  • Observe the current state.
  • Process information.
  • Select an action based on the policy.
  • Execute the action.
  • Receive feedback or new state.
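
A toy sketch of this observe, decide, act loop in plain Python. The environment, policy rule, and state values are hypothetical and only illustrate the cycle.

```python
# Minimal agent loop: observe the state, pick an action, act, repeat.
def observe(state):
    return state                                  # sensors read the current state

def policy(state):
    return "right" if state < 5 else "stop"       # rule that maps state to action

def act(state, action):
    return state + 1 if action == "right" else state   # actuators change the environment

state = 0
for step in range(10):
    obs = observe(state)          # 1. observe the current state
    action = policy(obs)          # 2. select an action based on the policy
    state = act(state, action)    # 3. execute the action, get the new state
    print(step, obs, action, state)
    if action == "stop":
        break
```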

Types of AI Agents

1. Simple Reflex Agents

React to the current state. No memory.

2. Model Based Agents

Use internal state to track the environment.

3. Goal Based Agents

Choose actions that help reach a target.

4. Utility Based Agents

Evaluate outcomes and pick actions with high utility.

5. Learning Agents

Improve behavior through experience.

Where AI Agents Are Used

  • Recommendation systems
  • Autonomous vehicles
  • Game characters
  • Automation tools
  • Business decision systems

Strengths of AI Agents

  • Adaptive behavior
  • Continuous interaction
  • Support complex tasks

Limitations

  • Depend on quality of environment signals
  • Need strong policies
  • Hard tasks require advanced models

Agent in Moroccan Darija

Agent f AI huwa nizaam kaychouf environment, kayfker, w kaydir action. Kay3awd had loop bash ywasal goal.

Components

  • Environment. Dunia li kaykhdem fiha agent.
  • Sensors. Kayjma3o info.
  • Actuators. Kaydir actions.
  • Policy. Qanoun dyal choices.

Kif Kaykhddam

  • Kaychouf state.
  • Kay7seb shno ydir.
  • Kaydir action.
  • Kayakhod feedback.

Types

  • Simple reflex.
  • Model based.
  • Goal based.
  • Utility based.
  • Learning agent.

Conclusion

An AI agent observes, decides, and acts. It follows a loop that links environment signals with actions. Agents support many real systems in AI and automation.

Mixture of Experts in AI

Mixture of Experts is a model design that uses several expert networks. Each expert handles part of the input. A gating network decides which expert should process each token or sample. This improves scale and efficiency.

Core Idea

Instead of one large model, MoE uses many experts. Only some experts activate for each input. This reduces compute while keeping model capacity high.

How Mixture of Experts Works

  • The model receives an input.
  • The gating network scores experts.
  • The model selects a few experts with high scores.
  • The input passes through selected experts.
  • The outputs combine into one final result.
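
The NumPy sketch below walks through these steps for a single token routed to two of four experts. The sizes and random weights are placeholders, and real MoE layers usually renormalize the kept gate weights.

```python
# Top-k Mixture of Experts routing for one token (toy sizes, NumPy only).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

x = rng.normal(size=d_model)                                 # one token embedding
W_gate = rng.normal(size=(n_experts, d_model))               # gating network weights
W_experts = rng.normal(size=(n_experts, d_model, d_model))   # one small expert per slot

# 1. The gating network scores every expert; softmax turns scores into probabilities.
scores = W_gate @ x
probs = np.exp(scores - scores.max())
probs /= probs.sum()

# 2. Keep only the top-k experts for this token.
chosen = np.argsort(probs)[-top_k:]

# 3. The chosen experts process the token; their outputs combine by gate weight.
output = sum(probs[i] * (W_experts[i] @ x) for i in chosen)
print(chosen, output.shape)
```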

Key Components

1. Experts

Each expert is a small neural network. Experts learn different patterns. They specialize during training.

2. Gating Network

The gate chooses which experts to activate. It uses softmax or top k routing.

3. Router

The router directs tokens to experts. Good routing improves quality and efficiency.

Why MoE Models Help

  • Increase capacity without increasing compute for each token
  • Improve specialization between experts
  • Scale to large tasks

Popular MoE Approaches

Switch Transformer

Uses one expert per token. Routing stays simple and fast.

GShard

Uses distributed experts across devices. Supports large scale training.

Sparse MoE Layers

Only some experts activate. This keeps training efficient.

Challenges

  • Balancing load across experts
  • Training stability
  • Routing complexity

Use Cases

  • Large language models
  • Multimodal systems
  • Machine translation
  • Vision language tasks

Mixture of Experts in Moroccan Darija

Mixture of Experts howa model li kayst3mel bzzaf dial experts. Kul expert kayt3llam pattern mokhtalef. Gating network kaykhtar shkon khaso ykhddm m3a input.

Kif Kaykhddam

  • Input kaydkhl.
  • Gate kaydir scoring.
  • Model kaykhtar experts b scoring kbir.
  • Experts kay3aljo input.
  • Outputs kaysslafo f result wahd.

Mfad

  • Capacity kbar b compute sghir.
  • Specialization dial experts.
  • Scale mzyan.

Moshkilat

  • Load balancing.
  • Training stability.
  • Routing.

Conclusion

Mixture of Experts increases model capacity with efficient compute. It uses routing and specialized experts to produce strong results in modern AI.

Reinforcement Learning in Machine Learning

Reinforcement Learning

Reinforcement learning is a machine learning method where an agent learns by interacting with an environment. The agent takes actions, receives rewards, and improves its strategy over time.

Core Idea

The agent learns a policy. The policy tells the agent which action to take in each state. The goal is to maximize long term reward.

How Reinforcement Learning Works

  • The agent observes a state.
  • The agent picks an action.
  • The environment returns a reward and a new state.
  • The agent updates its policy based on the reward.
  • The cycle repeats until the policy improves.

Main Components

1. Agent

The learner that chooses actions.

2. Environment

The world where the agent acts.

3. State

The current situation.

4. Action

The decision taken by the agent.

5. Reward

The feedback that guides learning.

6. Policy

The rule for selecting actions.

Types of Reinforcement Learning

1. Value Based RL

The agent learns the value of states or state action pairs. It chooses actions with maximum value.

  • Example. Q Learning
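
A minimal sketch of the tabular Q-Learning update in Python. The states, actions, and numbers are toy values chosen only to show the update rule.

```python
# Tabular Q-Learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a').
Q = {("s0", "right"): 0.0, ("s0", "left"): 0.0}

alpha, gamma = 0.1, 0.9            # learning rate and discount factor
state, action = "s0", "right"
reward, next_state = 1.0, "s1"

# Value of the best action in the next state (0.0 for unseen state-action pairs).
best_next = max(Q.get((next_state, a), 0.0) for a in ["right", "left"])

Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
print(Q[(state, action)])          # 0.1 after this single update
```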

2. Policy Based RL

The agent learns the policy directly. It adjusts policy parameters to improve reward.

  • Example. REINFORCE

3. Actor Critic Methods

These methods combine value learning and policy learning.

  • Examples. A2C and PPO

Exploration vs Exploitation

The agent must explore actions to find better rewards. It must also exploit known good actions. RL balances both.
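
A common way to balance the two is epsilon-greedy action selection, sketched below with made-up Q values.

```python
# Epsilon-greedy: explore with probability epsilon, otherwise exploit the best action.
import random

epsilon = 0.1                                    # explore 10 percent of the time
q_values = {"left": 0.2, "right": 0.7}           # illustrative action values

if random.random() < epsilon:
    action = random.choice(list(q_values))       # explore: pick a random action
else:
    action = max(q_values, key=q_values.get)     # exploit: pick the best known action
print(action)
```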

Popular Algorithms

  • Q Learning
  • Deep Q Network
  • PPO
  • SAC
  • A2C

Common RL Applications

  • Robotics control
  • Game playing
  • Recommendation systems
  • Autonomous navigation

Strengths

  • Learns through interaction
  • Improves with time
  • Works in dynamic environments

Limitations

  • Slow learning
  • Needs many interactions
  • Sensitive to reward design

Reinforcement Learning in Moroccan Darija

Reinforcement learning howa tariqa li kayt3llam fiha agent b interaction m3a environment. Agent kaydir action, kayakhod reward, w kayhssen policy.

Kif Kaykhddam

  • Agent kaychouf state.
  • Kaydir action.
  • Environment kayrje3 reward w state jdid.
  • Agent kayupdate policy.

Types

  • Value based. Q Learning.
  • Policy based. REINFORCE.
  • Actor critic. PPO.

Applications

  • Robots.
  • Games.
  • Recommendations.

Conclusion

Reinforcement learning builds agents that learn from rewards. It supports control, decision making, and adaptive behavior. It forms a strong branch of machine learning.

Unsupervised Learning in Machine Learning

Unsupervised Learning

Unsupervised learning uses data without labels. The model explores patterns by itself. It groups data, detects structure, and reduces dimensions.

Core Idea

The model scans inputs and finds similarities. It identifies hidden groups or compressed representations. It works without labeled outputs.

How Unsupervised Learning Works

  • Collect unlabeled data.
  • Choose an algorithm.
  • Fit the model to the data.
  • Extract groups or patterns.

Main Types of Unsupervised Learning

1. Clustering

Clustering groups similar data points.

Examples

  • K Means
  • Hierarchical clustering
  • DBSCAN
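
For example, a minimal K-Means run with scikit-learn on a few toy 2D points:

```python
# K-Means clustering on six toy points that form two obvious groups.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [8.0, 8.0], [8.3, 7.9], [7.8, 8.2]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)            # cluster id assigned to each point
print(kmeans.cluster_centers_)   # learned cluster centers
```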

2. Dimensionality Reduction

Dimensionality reduction compresses features and keeps core structure.

Examples

  • PCA
  • t SNE
  • UMAP
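
And a small PCA sketch with scikit-learn, compressing the four iris features down to two components:

```python
# PCA: reduce 4 features to 2 components while keeping most of the structure.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                    # 150 samples, 4 features
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                  # (150, 2)
print(pca.explained_variance_ratio_)    # variance kept by each component
```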

3. Association Rules

Association rules find links between items.

Examples

  • Market basket analysis
  • Apriori
  • FP Growth

4. Anomaly Detection

Anomaly detection identifies points that do not fit normal patterns.

Examples

  • Isolation forest
  • One class SVM

When to Use Unsupervised Learning

  • You do not have labels.
  • You want to explore data structure.
  • You want to group users, products, or signals.

Strengths

  • No need for labels
  • Reveals structure
  • Supports data exploration

Limitations

  • No clear accuracy metric
  • Results depend on chosen algorithm
  • Interpretation needs care

Common Applications

  • Customer segmentation
  • Anomaly detection
  • Document grouping
  • Feature compression

Unsupervised Learning in Moroccan Darija

Unsupervised learning kaykhdem bla labels. Model kay9lb 3la patterns men rasou. Kayjma3 data, kayhssb similarities, w kaybni structure jdida.

Types

  • Clustering. K Means w DBSCAN.
  • Dimensionality reduction. PCA.
  • Association rules. Apriori.
  • Anomaly detection.

Kif Kaykhddam

  • Kandkhlo data bla outputs.
  • Model kaydir grouping ola compression.
  • Kantla3o patterns m data.

Conclusion

Unsupervised learning explores data without labels. It finds groups and hidden structure. It supports discovery tasks in many fields.

Supervised Learning in Machine Learning

Supervised Learning

Supervised learning is a machine learning approach that uses labeled data. Each input has a known output. The model learns the link between them. After training, the model predicts outputs for new inputs.

Core Idea

The model studies examples with labels. It learns patterns. It tries to reduce errors. Then it generalizes to unseen data.

How Supervised Learning Works

  • Collect data with labels.
  • Split data into training and testing sets.
  • Train a model on the training set.
  • Measure performance on the test set.
  • Use the model for real predictions.
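
A minimal end-to-end sketch of this workflow with scikit-learn. The dataset and model choice are only examples.

```python
# Supervised learning workflow: split, train, evaluate, predict.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)            # labeled data

# Split data into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a model on the training set (scaling plus logistic regression).
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Measure performance on the test set.
y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
```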

Key Types of Supervised Learning

1. Classification

Classification predicts a class label. The output is discrete.

Examples

  • Email spam or not spam
  • Image category
  • Sentiment detection

2. Regression

Regression predicts a numeric value. The output is continuous.

Examples

  • House price
  • Sales numbers
  • Temperature prediction

Popular Supervised Algorithms

  • Linear regression
  • Logistic regression
  • Decision trees
  • Random forest
  • Support Vector Machine
  • KNN
  • Neural networks

When to Use Supervised Learning

  • You have labeled data.
  • You need precise predictions.
  • You want clear evaluation metrics.

Strengths

  • Strong performance with quality labels
  • Clear training process
  • Easy evaluation

Limitations

  • Needs labeled data
  • Labeling can take time
  • May not generalize well with weak data

Common Evaluation Metrics

For Classification

  • Accuracy
  • Precision
  • Recall
  • F1 score

For Regression

  • MSE
  • MAE
  • RMSE
  • R2 score

Supervised Learning in Moroccan Darija

Supervised learning howa type dial ML li kayst3mel data m3a labels. Kul input kaykoun 3ando output ma3rouf. Model kayt3llam had relation, w b3d kaydir predictions jdadin.

Kif Kaykhddam

  • Kandkhlo data mlabel.
  • Kandrbo model.
  • Kanchoufo performance f test set.
  • Kanst3mlo model f predictions.

Types

  • Classification. Output class.
  • Regression. Output number.

Algorithms

  • Linear regression.
  • Logistic regression.
  • Decision trees.
  • Random forest.
  • SVM.
  • KNN.
  • Neural networks.

Conclusion

Supervised learning depends on labeled data. It predicts classes or numbers with strong accuracy. It forms a key part of modern machine learning.

Attention Mechanism in Deep Learning

Attention Mechanism

The attention mechanism helps models focus on important parts of the input. It gives each token a weight. Higher weights mean more importance. This improves understanding of long sequences.

Why Attention Matters

  • Captures long range relations
  • Handles complex patterns
  • Improves context understanding
  • Works for text, images, and audio

Core Idea

Each token checks other tokens and decides how much to focus on them. The model uses queries, keys, and values to score relevance.

Components

  • Query. What the token needs.
  • Key. What the token offers.
  • Value. Information passed to the next layer.

How Attention Works

  • Compute similarity between query and all keys.
  • Convert scores into weights with softmax.
  • Multiply weights with values.
  • Sum the results to produce the final output.
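
A single-head NumPy sketch of these four steps, with toy sizes and random values:

```python
# Scaled dot product attention for one head (toy sequence of 4 tokens).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8

Q = rng.normal(size=(seq_len, d_k))   # queries: what each token needs
K = rng.normal(size=(seq_len, d_k))   # keys: what each token offers
V = rng.normal(size=(seq_len, d_k))   # values: information passed forward

# 1. Similarity between each query and all keys, scaled by sqrt(d_k).
scores = Q @ K.T / np.sqrt(d_k)

# 2. Softmax turns the scores into attention weights for each token.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# 3. Multiply weights with values and sum to get the output per token.
output = weights @ V
print(weights.shape, output.shape)    # (4, 4) (4, 8)
```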

Scaled Dot Product Attention

This is the standard attention in transformer models. It takes the dot product between queries and keys and scales the result by the square root of the key dimension. The scaling keeps the softmax stable during training.

Multi Head Attention

The model splits attention into multiple heads. Each head learns a different relationship. The outputs of all heads merge into one vector. This improves representation strength.

Types of Attention

1. Self Attention

Each token attends to itself and to all other tokens. This helps the model learn context inside the sequence.

2. Cross Attention

Used in encoder decoder models. The decoder attends to encoder outputs. This helps tasks like translation.

3. Local Attention

Attention applied to nearby tokens only. This reduces compute cost.

4. Global Attention

Some tokens attend across the entire sequence. Useful for long input models.

Applications of Attention

  • Machine translation
  • Text summarization
  • Question answering
  • Image captioning
  • Speech tasks

Strengths

  • Strong understanding of context
  • Parallel computation
  • Flexible for different data types

Limitations

  • High memory use
  • Heavy compute needs for long sequences

Attention Mechanism in Moroccan Darija

Attention hiyya tariqa kats3ed model ychouf l parts l mohimmin f input. Kul token kay7seb relevance dial tokens okhrin w kay3ti weights.

Kif Kaykhddam

  • Query bach n3rf shno bghina.
  • Key bach n3rf shno kay3ti token.
  • Value bach n9addo info.
  • Scores kayt7awlo weights b softmax.
  • Weights kaytjma3o m3a values.

Types

  • Self attention.
  • Cross attention.
  • Local.
  • Global.

Conclusion

Attention helps deep learning models focus on useful information. It supports strong performance in transformers and drives progress in modern AI.

Transformers in Machine Learning and Deep Learning

Transformers

Transformers are deep learning models that work with sequence data. They use attention to understand relations between tokens. They power modern models in language, vision, and multimodal AI.

Why Transformers Matter

  • They process sequences without recurrence.
  • They support parallel computation.
  • They capture long range patterns.

Core Architecture

1. Input Embeddings

Tokens turn into vectors. These vectors store meaning for each token.

2. Positional Encoding

Transformers add position information because they do not use recurrence.

3. Encoder Blocks

Each block uses attention and a feedforward layer. Encoders build strong representations for inputs.

4. Decoder Blocks

Decoders generate outputs. They use masked attention so each position cannot attend to future tokens.

Attention Mechanism

Attention shows how much each token should focus on others. It uses three parts.

  • Query. Describes what the token needs.
  • Key. Describes what each token offers.
  • Value. Passes information to the next layer.

The model calculates attention scores and mixes values based on these scores.

Multi Head Attention

Attention splits into several heads. Each head learns a different relation. The results combine into one vector.

Feedforward Layers

Each block has a feedforward network. It adds non linear changes after attention.
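
As a rough illustration, PyTorch bundles multi head attention, the feedforward network, and normalization into a single encoder block. The sizes below are arbitrary.

```python
# One transformer encoder block built from PyTorch components.
import torch
import torch.nn as nn

d_model, n_heads, seq_len, batch = 64, 4, 10, 2

encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
)

x = torch.randn(batch, seq_len, d_model)   # token embeddings with positions already added
out = encoder_layer(x)
print(out.shape)                           # torch.Size([2, 10, 64])
```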

Popular Transformer Models

BERT

Works for understanding text. Uses encoder blocks only.

GPT

Works for text generation. Uses decoder blocks only.

T5

Works with text to text tasks. Uses encoder and decoder.

Vision Transformers

Split images into patches and process them as sequences.

Training Transformers

  • Use large datasets.
  • Train with self supervised tasks.
  • Use Adam or AdamW optimizers.
  • Use attention masks for control.

Strengths of Transformers

  • Strong with long sequences.
  • Fast training with parallelism.
  • Flexible for many data types.

Limitations

  • High memory use
  • Heavy compute needs
  • Sensitive training process

Transformers in Moroccan Darija

Transformers hiyya models li katkhddm b attention. Kayfhamu relation bin tokens bla loops. Hadi khllathom ykouno qwiya f NLP w hatta f vision.

Kif Kaykhddmo

  • Embeddings bach n7awlo words vectors.
  • Positional encoding bach n3rfo trtib.
  • Attention bach token ychouf tokens okhrin.
  • Feedforward layers bach nzido transformation.

Models Mcharfin

  • BERT f understanding.
  • GPT f generation.
  • T5 f text to text.
  • Vision Transformers f images.

Conclusion

Transformers drive progress in modern AI. They use attention and parallel processing to learn strong patterns. They support language, vision, and multimodal tasks with high performance.

Graph Neural Networks in Machine Learning

Graph Neural Networks

Graph Neural Networks (GNNs) work with data organized as graphs. A graph has nodes and edges. Nodes represent entities. Edges represent connections. GNNs learn patterns from both nodes and relationships.

Why GNNs Matter

Many real problems use graph structures: social networks, molecules, recommendation systems, and knowledge graphs. GNNs learn from connections and structure, not just raw features.

Core Idea

A GNN updates each node by collecting information from its neighbours. This method is called message passing. Over several layers, each node learns both local and global structure.

How GNNs Work

  • Start with initial node features.
  • Gather neighbour features.
  • Aggregate them using sum, mean, or attention.
  • Combine with the node’s own features.
  • Apply a neural layer.
  • Repeat across multiple GNN layers.
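
A minimal NumPy sketch of one mean-aggregation message passing layer on a toy four-node graph. The adjacency matrix, features, and weights are invented for the example.

```python
# One GNN layer: average neighbour features, combine with the node, apply weights.
import numpy as np

# Adjacency matrix of a small graph with 4 nodes (1 means an edge exists).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

H = np.eye(4)                                        # initial node features
W = np.random.default_rng(0).normal(size=(4, 4))     # layer weights

# 1. Gather neighbour features and average them (mean aggregation).
deg = A.sum(axis=1, keepdims=True)
neighbour_mean = (A @ H) / deg

# 2. Combine with each node's own features, apply the weights and a ReLU.
H_next = np.maximum(0, (H + neighbour_mean) @ W)
print(H_next.shape)                                  # (4, 4): updated node features
```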

Message Passing

Message passing defines how nodes share information. Each GNN layer spreads features one hop away. After multiple layers, nodes capture multi-hop relationships across the graph.

Popular GNN Models

1. GCN

Graph Convolutional Network smooths features across the graph using a normalized adjacency matrix.

2. GAT

Graph Attention Network uses attention to weigh neighbours. Important nodes receive higher weights.

3. GraphSAGE

GraphSAGE samples neighbours and aggregates them. It works well for large graphs and inductive learning.

4. GIN

Graph Isomorphism Network offers strong expressive power. It distinguishes complex graph structures.

Common GNN Tasks

Node Classification

Predict a label for each node. Example: classifying users in a social network.

Graph Classification

Predict a label for the whole graph. Example: molecule toxicity prediction.

Link Prediction

Predict if a connection should exist. Example: recommending friends or items.

Recommendation

Use graph structure to suggest items based on relationships.

Strengths of GNNs

  • Capture relational information
  • Work on irregular, non-grid data
  • Generalize well across graph structures

Limitations

  • Slow on very large graphs
  • Over-smoothing when using many layers
  • Hard to interpret

Improving GNN Performance

  • Use attention layers
  • Limit depth to avoid over-smoothing
  • Sample neighbours (GraphSAGE)
  • Apply normalization and residual connections

GNNs in Moroccan Darija

Graph Neural Networks katkhddm m3a data li katsowwer graph. Nodes homa entities. Edges homa relations. Model kayt3llam men neighbours bach ybni representation qwiya.

Kif Kaykhddmo

  • Kaybda b node features.
  • Kayjma3 info men neighbours.
  • Kayupdate node.
  • Layers katsift info f jmi3 l graph.

Models

  • GCN bach ysmout features.
  • GAT b attention.
  • GraphSAGE bach ysampeli neighbours.

Tasks

  • Node classification.
  • Graph classification.
  • Link prediction.

Conclusion

GNNs learn from relational data using message passing. They support tasks in social networks, chemistry, recommendation systems, and more. They are a key part of modern deep learning.
