
- Members: 4
- Posts: 11 (avg 1)
- Courses: 11
- Price / Month: $9
- Community type: Hobby
- Created: Oct 01, 2025 (Europe/Berlin)
- Owner: Vuk Rosić
- URL name: become-ai-researcher-2669
Become AI Researcher - Step by Step
Learn how to think, code, and experiment like engineers at OpenAI, DeepMind, and Anthropic.
7-Day FREE TRIAL — 0 experience required.
Publish your own papers.
📐 Mathematics Fundamentals
- Functions
- Derivatives
- Vectors
- Matrices
- Gradients
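As a rough illustration of the kind of material this maths module covers (a sketch of my own, not taken from the course itself), a gradient can always be sanity-checked numerically:

```python
import numpy as np

# Hypothetical example, not from the course: f(x, y) = x**2 + 3*y.
# The analytic gradient is (2x, 3); we verify it with finite differences.
def f(v):
    x, y = v
    return x**2 + 3 * y

def numerical_gradient(func, v, eps=1e-6):
    grad = np.zeros_like(v)
    for i in range(len(v)):
        step = np.zeros_like(v)
        step[i] = eps
        grad[i] = (func(v + step) - func(v - step)) / (2 * eps)
    return grad

v = np.array([2.0, -1.0])
print(numerical_gradient(f, v))  # approx [4.0, 3.0]
```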
🔥 PyTorch Fundamentals
- Creating Tensors
- Tensor Addition
- Matrix Multiplication
- Transposing Tensors
- Reshaping Tensors
- Indexing & Slicing
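A minimal sketch of the tensor operations listed above, assuming standard PyTorch; shapes are illustrative and none of this code comes from the course material:

```python
import torch

a = torch.arange(6, dtype=torch.float32).reshape(2, 3)  # creating + reshaping
b = torch.ones(2, 3)

print(a + b)          # element-wise tensor addition
print(a @ b.T)        # matrix multiplication: (2x3) @ (3x2) -> (2x2)
print(a.T.shape)      # transposing: torch.Size([3, 2])
print(a[0, 1:])       # indexing & slicing the first row
```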
🧠 Neuron From Scratch
- What is a Neuron
- Linear Step
- Activation Function
- Neuron in Python
- Loss
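A plain-Python sketch of a single neuron along the lines of this module (linear step, activation, loss); the weights and data here are made up for illustration:

```python
import math

def neuron(x, w, b):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear step: w . x + b
    return 1 / (1 + math.exp(-z))                 # sigmoid activation

x, w, b, target = [0.5, -1.2], [0.8, 0.1], 0.05, 1.0
y = neuron(x, w, b)
loss = (y - target) ** 2                          # squared-error loss
print(y, loss)
```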
⚡ Activation Functions
- ReLU
- Sigmoid
- Tanh
- SiLU
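All four activations listed above ship with PyTorch, so a quick comparison needs only a few lines (an illustrative sketch, not course code):

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)
print(torch.relu(x))     # ReLU: max(0, x)
print(torch.sigmoid(x))  # Sigmoid: 1 / (1 + exp(-x))
print(torch.tanh(x))     # Tanh
print(F.silu(x))         # SiLU: x * sigmoid(x)
```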
🔗 Neural Networks
- Network Architecture
- Layers
- Network Implementation
- Chain Rule
- Gradients
- Backprop
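A hedged sketch of the ideas in this module: a tiny two-layer network where calling `backward()` applies the chain rule and fills in gradients. Layer sizes and data are assumptions for illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x, target = torch.randn(16, 4), torch.randn(16, 1)

loss = F.mse_loss(model(x), target)
loss.backward()                    # backprop: chain rule through both layers
print(model[0].weight.grad.shape)  # gradients for the first layer: (8, 4)
```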
👁️ Attention Mechanism
- What is Attention
- Self-Attention
- Attention Scores
- Attention Weights
- Multi-Head Attention
- Attention in Code
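As a rough sketch of single-head self-attention matching the topics above (scores, weights, output); the projection matrices and shapes are illustrative assumptions, not the course's own code:

```python
import torch
import torch.nn.functional as F

x = torch.randn(5, 8)                      # 5 tokens, dimension 8
Wq, Wk, Wv = (torch.randn(8, 8) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / (8 ** 0.5)              # scaled attention scores
weights = F.softmax(scores, dim=-1)        # attention weights sum to 1 per row
out = weights @ v
print(out.shape)                           # (5, 8)
```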
🔀 Transformer Feedforward
- Feedforward Layer
- The Expert
- The Gate
- Combining Experts
- MoE
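A minimal mixture-of-experts feedforward sketch in the spirit of this module: several small expert MLPs, a gate that produces mixing weights, and a weighted combination. All sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, hidden, n_experts = 8, 16, 4
experts = nn.ModuleList([
    nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, d))
    for _ in range(n_experts)
])
gate = nn.Linear(d, n_experts)

x = torch.randn(5, d)                               # 5 tokens
gate_weights = F.softmax(gate(x), dim=-1)           # (5, n_experts)
expert_outs = torch.stack([e(x) for e in experts])  # (n_experts, 5, d)
out = torch.einsum("te,etd->td", gate_weights, expert_outs)  # combine experts
print(out.shape)                                    # (5, 8)
```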
🏗️ Building a Transformer
- Transformer Architecture
- RoPE
- Transformer Block
- Final Linear Layer
- Training
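Finally, a hedged sketch of a minimal transformer block (pre-norm attention plus feedforward, without RoPE) followed by the final linear layer over a toy vocabulary; every size here is an assumption for illustration:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.n1, self.n2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        a, _ = self.attn(self.n1(x), self.n1(x), self.n1(x))
        x = x + a                       # residual around attention
        return x + self.ff(self.n2(x))  # residual around feedforward

vocab, d = 100, 32
embed = nn.Embedding(vocab, d)
block = Block(d)
head = nn.Linear(d, vocab)              # final linear layer -> logits

tokens = torch.randint(0, vocab, (1, 10))
logits = head(block(embed(tokens)))
print(logits.shape)                     # (1, 10, 100)
```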