
Incremental builds (faster publish) #130

Open
hamoid opened this issue Jun 6, 2013 · 8 comments
hamoid (Contributor) commented Jun 6, 2013

Due to the size of my blog, running build takes between 30 and 120 seconds.

There are 279 posts, but I suspect what makes it slow is the 220 MB of images the posts contain.

Possible solutions:

  1. Use symlinks to media instead of copying media files. This would also avoid duplicating every file (in my case over 200 MB "wasted").
  2. When copying media, preserve file attributes, and before copying check whether each item already exists with the same size and timestamp. That alone would avoid copying unchanged files and might make the build many times faster (if copying files is the bottleneck).
  3. Keep an internal list of all files and detect the changed ones, then build only those. One way is to walk all folders recursively and collect data for each file. To avoid making hundreds of per-file info calls, one could shell out to `ls`: run "ls -lR contents/articles/ > file_details.txt" and compare that file with the previous version to find what changed. Hmm, I think I just forgot that there are Windows users out there... :)

Thanks!

jnordberg (Owner) commented
The problem is that not all content is derived from a file. I'm thinking of adding something like an isModifiedSince method to content plugins that you can implement to enable incremental builds. This would also allow caching in the preview server.

hamoid (Contributor, Author) commented Jun 25, 2013

Sounds like a good solution.

mjpizz commented Jul 3, 2013

Do you think incremental builds could be done in a way that enables caching to disk? For example, to survive a restart of the preview server.

jnordberg (Owner) commented

@mjpizz Yeah, that would be possible. When this is implemented I plan to put a memory cache in front of the preview server, and that cache could just as easily be a disk cache. But why would you want persistent caching for the preview server?

mjpizz commented Jul 4, 2013

The main issue is performance during development and previewing. Once we're over 1000 articles, it would be nice if wintersmith didn't have a 30s-1min bootup time, and if individual page changes were updated more-or-less instantly.

For example: consider a "related articles" plugin. This might require scanning all 1000 articles to find the ones that share a specific tag before rendering. Cold booting the preview server in this case might become a pretty intensive process of basically rebuilding an in-memory index every time somebody sits down to hack on the design.

Optional cache persistence via a simple database (or even JSON flatfile) would probably help cold-boot-for-large-websites quite a bit. Do you think it makes sense?

jnordberg (Owner) commented

@mjpizz Hmm, the cache I'm talking about here wouldn't help in that case. Wintersmith needs to build up the content tree on startup; that would have to be a separate cache, and each content plugin instance would have to be (de)serializable. When Harmony proxies land in Node we could do something like creating the instances on the content tree only when they are accessed; that, along with the output cache, would solve most performance problems for large projects.
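For illustration, create-on-access could look roughly like this with ES Proxies ("Harmony proxies" was the staging name for what later shipped as the standard Proxy object); loadContent stands in for a hypothetical expensive plugin constructor:

```javascript
// Return a tree-like object whose entries are built lazily:
// loadContent(name) runs on first access to a name, then is cached.
function lazyTree(filenames, loadContent) {
  const cache = {};
  return new Proxy({}, {
    get(target, name) {
      if (!(name in cache) && filenames.includes(name)) {
        cache[name] = loadContent(name); // built on first access only
      }
      return cache[name];
    }
  });
}
```

Startup then only lists filenames; the expensive per-file work is deferred until a page actually needs that entry.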

tusharmath (Contributor) commented
Sorry guys, this might be a bit of a tangent: how and where do you deploy your content? I personally created a Gruntfile to publish to GitHub, which I don't feel is very elegant.

hamoid (Contributor, Author) commented Oct 7, 2013

@tusharmath Very much a tangent :) I rsync to my own server. It takes seconds.

@jnordberg jnordberg added the 3.0 label Nov 19, 2018