Merge pull request #53 from pitmonticone/main
Fix typos
chriselrod committed Apr 15, 2022
2 parents 2485545 + aa54ef1 commit bba406b
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions docs/src/examples/mnist.md
@@ -56,7 +56,7 @@ corresponding to the model:
```julia
@time p = SimpleChains.init_params(lenet);
```
-The convolutional layers are initialized with a Glorot (Xavier) unifirom distribution,
+The convolutional layers are initialized with a Glorot (Xavier) uniform distribution,
while the dense layers are initialized with a Glorot (Xaviar) normal distribution.
Biases are initialized to zero.
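
For reference, the standard Glorot formulas are sketched below; these are the usual textbook definitions, assumed here rather than read from SimpleChains' internals:

```julia
# Rough sketch of the standard Glorot (Xavier) initialization formulas.
# (Assumed textbook definitions; SimpleChains' exact constants may differ.)
glorot_uniform_bound(fan_in, fan_out) = sqrt(6 / (fan_in + fan_out))  # W ~ U(-b, b)
glorot_normal_std(fan_in, fan_out)    = sqrt(2 / (fan_in + fan_out))  # W ~ N(0, σ²)
```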
Because the number of parameters can be a function of the input size, these must
@@ -75,7 +75,7 @@ G = SimpleChains.alloc_threaded_grad(lenetloss);
Here, we're estimating that the number of physical cores is half the number of threads
on an `x86_64` system, which is true for most -- but not all!!! -- of them.
Otherwise, we're assuming it is equal to the number of threads. This is of course also
-likely to be wrong, e.g. recent Power CPUs may habe 4 or even 8 threads per core.
+likely to be wrong, e.g. recent Power CPUs may have 4 or even 8 threads per core.
You may wish to change this, or use [Hwloc.jl](https://github.com/JuliaParallel/Hwloc.jl) for an accurate number.
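
As a rough illustration of that suggestion, querying the topology directly could look like the sketch below (it assumes only Hwloc.jl's `num_physical_cores` helper):

```julia
using Hwloc  # assumes the Hwloc.jl package is installed

physical_cores = Hwloc.num_physical_cores()  # counted from the hardware topology
threads        = Sys.CPU_THREADS             # what the half-the-threads heuristic divides
@show physical_cores threads
```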

Now that this is all said and done, we can train for `10` epochs using the `ADAM` optimizer
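
The full call appears further down the original file; a minimal sketch of that step (the variable name `xtrain4` comes from earlier in the example and is assumed here, as is the `3e-4` learning rate) would be roughly:

```julia
# Train for 10 epochs with ADAM, accumulating gradients in the threaded buffer `G`.
@time SimpleChains.train_batched!(G, p, lenetloss, xtrain4, SimpleChains.ADAM(3e-4), 10);
```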
