diff --git a/docs/getting_started.rst b/docs/getting_started.rst
index 12b7f9d..a25511e 100644
--- a/docs/getting_started.rst
+++ b/docs/getting_started.rst
@@ -29,11 +29,11 @@ In this example, the file :samp:`main.py` contains a function :samp:`my_task` th
 
     def my_task(ctx: mlxp.Context)->None:
       # Displaying user-defined options from './configs/config.yaml
-      print("ctx.config")
+      print(ctx.config)
 
       # Logging information in log directory created by MLXP: (here "./logs/1" )
-      for i in range(ctx.config.num_epoch)
-        ctx.logger.log_metrics({"epoch":i})
+      for i in range(ctx.config.num_epoch):
+        ctx.logger.log_metrics({"epoch":i}, log_name="Quickstart")
diff --git a/docs/tutorial_introduction.rst b/docs/tutorial_introduction.rst
index 4de8199..79b2086 100644
--- a/docs/tutorial_introduction.rst
+++ b/docs/tutorial_introduction.rst
@@ -19,22 +19,22 @@ The first step is to create a directory 'tutorial' containing the code needed fo
 
    tutorial/
    ├── configs/
    │   └── config.yaml
-   ├── core.py
+   ├── core_app.py
    ├── main.py
    └── read.py
 
-The directory contains three files: :samp:`core.py`, :samp:`main.py` :samp:`results.py`. It also contains a directory :samp:`configs` that will be used later by MLXP. For now, we will only have a look at the :samp:`core.py` and :samp:`main.py` files.
+The directory contains three files: :samp:`core_app.py`, :samp:`main.py`, and :samp:`read.py`. It also contains a directory :samp:`configs` that will be used later by MLXP. For now, we will only have a look at the :samp:`core_app.py` and :samp:`main.py` files.
 
-The :samp:`core.py` file
-""""""""""""""""""""""""
+The :samp:`core_app.py` file
+""""""""""""""""""""""""""""
 
-The file :samp:`core.py` contains a PyTorch implementation of a one hidden layer network :samp:`OneHiddenLayer` as well as a simple data loader :samp:`DataLoader` that we will use during training.
-In the rest of the tutorial, we will not need to worry about the content of :samp:`core.py`, but let's just have a quick look at this file:
+The file :samp:`core_app.py` contains a PyTorch implementation of a one-hidden-layer network :samp:`OneHiddenLayer` as well as a simple data loader :samp:`DataLoader` that we will use during training.
+In the rest of the tutorial, we will not need to worry about the content of :samp:`core_app.py`, but let's just have a quick look at this file:
 
 .. code-block:: python
-   :caption: main.py
+   :caption: core_app.py
 
    import torch
    import torch.nn as nn
diff --git a/tutorial/README.rst b/tutorial/README.rst
index 253be81..449de66 100644
--- a/tutorial/README.rst
+++ b/tutorial/README.rst
@@ -4,7 +4,7 @@ MLXP Tutorial
 
 This page presents the main functionalities of MLXP through a tutorial.
 See the following page for more detailed information:
 
-- `Tutorial `__
+- `Tutorial `__
 
 Requirements
 ------------
diff --git a/tutorial/core_app.py b/tutorial/core_app.py
index a162477..f687bcb 100644
--- a/tutorial/core_app.py
+++ b/tutorial/core_app.py
@@ -27,8 +27,8 @@ def __init__(self, d_int, device, normalize=False):
         self.X = torch.normal(mean= torch.zeros(N_samples,d_int,dtype=dtype,device=device),std=1.)
         if normalize:
-            inv_norm = 1./tr.norm(self.X,dim=1)
-            self.X = tr.einsum('nd,n->nd',self.X,inv_norm)
+            inv_norm = 1./torch.norm(self.X,dim=1)
+            self.X = torch.einsum('nd,n->nd',self.X,inv_norm)
 
         self.total_size = N_samples
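The last hunk fixes a stale `tr.` alias to `torch.`; the two corrected lines rescale each row of `self.X` to unit Euclidean norm (`einsum('nd,n->nd', X, 1/||X_n||)`). As a sanity check on what that normalization does, here is a minimal pure-Python sketch of the same operation — `normalize_rows` and the sample data are hypothetical and stand in for the tensor version, with no PyTorch dependency:

```python
import math

def normalize_rows(X):
    # Mirrors: inv_norm = 1./torch.norm(X, dim=1);
    #          X = torch.einsum('nd,n->nd', X, inv_norm)
    out = []
    for row in X:
        norm = math.sqrt(sum(v * v for v in row))  # per-row L2 norm (dim=1)
        out.append([v / norm for v in row])        # scale row by 1/norm
    return out

X = [[3.0, 4.0], [0.0, 2.0]]
print(normalize_rows(X))  # → [[0.6, 0.8], [0.0, 1.0]], each row has unit length
```

Note that this sketch, like the patched code, divides by the row norm directly and so assumes no row of the data is all zeros.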