# jacarr4/randomized-optimization-final
My code can be accessed at: `git@github.com:jacarr4/randomized-optimization-final.git`

There are several required Python packages: numpy, pandas, scikit-learn, and mlrose_hiive. There may be others.

There are several scripts of interest.

To generate hyperparameters for the four RO problems:

```
python -m find_hyperparameters --learner <learner> --problem <problem>
```

`<learner>` can be one of:

* hill_climbing
* simulated_annealing
* genetic_alg
* mimic

`<problem>` can be one of:

* onemax
* flipflop
* four_peaks

This script displays plots for the hyperparameter choices, which can be tweaked by editing the `optimize_<alg>_hyperparams` methods.

To run the four RO problems:

```
python -m randomized_optimization --problem <problem>
```

`<problem>` can be one of:

* onemax
* flipflop
* four_peaks

This script displays fitness-vs-iterations and fitness-vs-evaluations curves. You can edit the `solveWith<Method>` methods to plug in hyperparameters generated by `find_hyperparameters.py`.

To run the neural network problem:

```
python -m compute_neural_network --learner <learner> [--plot]
```

`<learner>` can be one of:

* gradient_descent
* random_hill_climb
* simulated_annealing
* genetic_alg

This runs the neural network, training it with the given learner. If run with `--plot`, the script plots the learning and training-time curves. There is a loop in the `run` method that is helpful for training one hyperparameter at a time.

That is all. Thank you!

Jacob Carr
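For reference, the three bit-string problems above have simple fitness functions. Below is a minimal pure-Python sketch of how they are commonly defined; mlrose_hiive provides its own OneMax, FlipFlop, and FourPeaks fitness classes, and the exact threshold handling in Four Peaks (the `ceil` on `t_pct * n`) is an assumption based on the usual formulation, not a quote of the library's code.

```python
import math

def onemax(state):
    """OneMax: fitness is the number of 1 bits."""
    return sum(state)

def flipflop(state):
    """FlipFlop: fitness is the number of adjacent pairs that differ."""
    return sum(1 for a, b in zip(state, state[1:]) if a != b)

def four_peaks(state, t_pct=0.1):
    """Four Peaks: reward the longer of the leading run of 1s and the
    trailing run of 0s, plus a bonus of n when both runs exceed T."""
    n = len(state)
    T = math.ceil(t_pct * n)
    head = 0                          # length of the leading run of 1s
    for bit in state:
        if bit != 1:
            break
        head += 1
    tail = 0                          # length of the trailing run of 0s
    for bit in reversed(state):
        if bit != 0:
            break
        tail += 1
    bonus = n if (head > T and tail > T) else 0
    return max(head, tail) + bonus

print(onemax([1, 1, 0, 1]))                  # 3
print(flipflop([0, 1, 0, 1]))                # 3
print(four_peaks([1, 1, 1, 0, 0, 0, 0, 0]))  # head=3, tail=5, bonus=8 -> 13
```

The deceptive structure of Four Peaks (the bonus region is far from the easy `max(head, tail)` gradient) is why it is a standard stress test for these algorithms.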
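The scripts drive mlrose_hiive's implementations of these learners. To illustrate what random hill climbing does under the hood, and where a fitness-vs-iterations curve comes from, here is a self-contained sketch on OneMax; the single-bit-flip neighbourhood and stopping rule are illustrative assumptions, not the library's exact implementation.

```python
import random

def onemax(state):
    return sum(state)

def hill_climb(fitness, n, max_iters=10_000, seed=0):
    """Random hill climbing on a length-n bit string: flip one random
    bit per iteration, keep the flip unless it lowers fitness."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(state)
    curve = [best]                 # best fitness per iteration
    for _ in range(max_iters):
        i = rng.randrange(n)
        state[i] ^= 1              # propose a one-bit flip
        f = fitness(state)
        if f >= best:
            best = f               # keep non-worsening moves
        else:
            state[i] ^= 1          # revert worsening moves
        curve.append(best)
        if best == n:              # optimum for OneMax
            break
    return state, best, curve

state, best, curve = hill_climb(onemax, 30)
print(best)  # 30: OneMax has no local optima, so hill climbing reaches it
```

Plotting `curve` against its index gives exactly the kind of fitness-vs-iterations curve that `randomized_optimization` displays.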
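The hyperparameters that `find_hyperparameters` searches over are quantities like simulated annealing's initial temperature and decay rate. A minimal sketch of SA with a geometric cooling schedule, again on OneMax, shows what those two knobs control; the parameter names and schedule here are illustrative, not mlrose_hiive's API.

```python
import math
import random

def onemax(state):
    return sum(state)

def simulated_annealing(fitness, n, init_temp=2.0, decay=0.995,
                        max_iters=5000, seed=1):
    """SA on a bit string: accept worse moves with probability
    exp(delta / temp), cooling the temperature geometrically."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    current = fitness(state)
    best_state, best = list(state), current
    temp = init_temp
    for _ in range(max_iters):
        i = rng.randrange(n)
        state[i] ^= 1                      # propose a one-bit flip
        candidate = fitness(state)
        delta = candidate - current
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current = candidate            # accept (always if not worse)
            if current > best:
                best, best_state = current, list(state)
        else:
            state[i] ^= 1                  # reject: undo the flip
        temp *= decay                      # geometric cooling
    return best_state, best

state, best = simulated_annealing(onemax, n=20)
```

A high `init_temp` or slow `decay` means more exploration early on; as `temp` shrinks, the acceptance probability for worse moves vanishes and SA behaves like hill climbing. Those are exactly the trade-offs the hyperparameter plots expose.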
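For the neural network problem, the idea is to replace gradient descent's weight update with a randomized-optimization search over the weight vector, which is what mlrose_hiive's neural network support does when given a learner like random_hill_climb. A toy sketch of that idea, with a single sigmoid neuron learning the OR function (the dataset, step size, and loop structure are illustrative assumptions):

```python
import math
import random

# Tiny dataset: the OR function.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def predict(weights, x):
    # Single sigmoid neuron: weights = [w1, w2, bias].
    z = weights[0] * x[0] + weights[1] * x[1] + weights[2]
    return 1.0 / (1.0 + math.exp(-z))

def loss(weights):
    # Squared error over the whole dataset.
    return sum((predict(weights, x) - y) ** 2 for x, y in DATA)

def train_rhc(iters=3000, step=0.3, seed=0):
    """Train by random hill climbing: perturb one weight at a time
    and keep the change only if the loss decreases."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(3)]
    current = loss(weights)
    for _ in range(iters):
        i = rng.randrange(3)
        old = weights[i]
        weights[i] += rng.gauss(0, step)   # perturb one weight
        new = loss(weights)
        if new < current:
            current = new                  # keep the improvement
        else:
            weights[i] = old               # revert
    return weights, current

weights, final_loss = train_rhc()
print(f"final loss: {final_loss:.3f}")
```

Because only loss-decreasing moves are kept, the loss is non-increasing over training, which is why the training curves plotted with `--plot` look so different between gradient_descent and the randomized learners.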