This repository has been archived by the owner on Jun 14, 2018. It is now read-only.

0.2.4: many test failures #15

Open
QuLogic opened this issue Mar 2, 2018 · 8 comments

@QuLogic (Contributor) commented Mar 2, 2018

I'm trying to upgrade to 0.2.3, but many of the tests fail. Basically, every image touched in b4a8b30 no longer matches. If I patch the files back to how they were before that commit, everything passes. I also don't see any code changes that would explain why these images needed to change.
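
For what it's worth, a quick way to see how far off the failing images are is to diff a regenerated output against the reference with Pillow itself. This is a minimal sketch, not part of the test suite, and the file names are hypothetical:

```python
# Minimal sketch (hypothetical file names): diff a regenerated image against
# the reference image checked into the repository.
from PIL import Image, ImageChops

ref = Image.open("tests/data/expected.png").convert("RGB")   # reference from git
got = Image.open("tests/data/actual.png").convert("RGB")     # locally regenerated output

diff = ImageChops.difference(ref, got)
bbox = diff.getbbox()  # None means the two images are pixel-identical
print("differing region:", bbox)
if bbox:
    # Small per-channel deltas point at codec/rounding differences rather
    # than an actual algorithm change.
    print("max channel delta:", max(hi for _, hi in diff.getextrema()))
```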

@jflesch (Member) commented Mar 2, 2018

Can you remind me what environment you're working in (OS, distribution, architecture, version of Pillow, etc.)? Do you test using make test, a virtualenv without system site packages, or the system site packages?

By the way, I've made many changes to the build and test systems. I haven't had time to announce them on the mailing list yet, sorry. You can still run the tests the old way, but I would advise running make test instead (which runs tox, which in turn takes care of creating a virtualenv).

Just for reference, here is the output of the buildbot: https://origami.openpaper.work/#/builders/5/builds/147 . This builder runs the tests on Debian stable amd64. While it's not obvious from the output, Pillow is not installed system-wide, so Tox had to install it before running the tests, which means they ran with Pillow 5.0.
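
One way to double-check which Pillow build actually gets used (just a sanity check, not something the project ships) is to print the version and import path from the same interpreter that runs the tests:

```python
# Sanity check: report which Pillow build the current interpreter imports.
import PIL

version = getattr(PIL, "__version__", getattr(PIL, "PILLOW_VERSION", "unknown"))
print("Pillow version:", version)
print("Imported from: ", PIL.__file__)
```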

@jflesch added the bug label Mar 2, 2018
@QuLogic (Contributor, Author) commented Mar 3, 2018

Fedora 26+, x86_64. Rawhide and F28 use Pillow 5.0.0, and 26 and 27 use some older versions. I am building system packages, so there's no need for a virtualenv.

I did notice the addition of make test, but I don't use it, as Tox is unnecessary here; I just run nosetests directly.

@jflesch (Member) commented Mar 3, 2018

Well, this is going to be a tricky one ... :/

@jflesch (Member) commented Mar 4, 2018

Hm, I think the update to the test data was related to this commit: 9719e41 .
In other words, I started running the tests in a virtualenv (without access to system packages) so as not to depend on the version of Pillow installed on the system (Debian stable still provides Pillow 4.0.0, for instance), and that is when I had to update the data.
If I run the tests using the python3-pillow installed on my Debian stable system (the official Debian package), the tests fail.

My guess would be that the Pillow from PyPI (the wheels) was compiled with slightly different options than the Debian or Fedora builds.
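
If that's the case, comparing the codec libraries each build was compiled against might show it. A hedged sketch (attribute availability depends on how Pillow was built):

```python
# Hedged sketch: print the codec library versions Pillow was compiled against.
# Differences in libjpeg/libjpeg-turbo or zlib between the PyPI wheels and the
# distro packages could explain pixel-level drift in the test images.
from PIL import Image

core = Image.core
print("jpeglib:", getattr(core, "jpeglib_version", "no JPEG support compiled in"))
print("zlib:   ", getattr(core, "zlib_version", "no zlib support compiled in"))
```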

@jflesch (Member) commented Mar 4, 2018

Yep, confirmed, it comes from Pillow.
When I run the tests from a virtualenv, it installs Pillow from PyPI, and then the tests pass.
Pillow is the only dependency of pypillowfight and therefore the only difference between the virtualenv and non-virtualenv runs.

@QuLogic (Contributor, Author) commented Mar 4, 2018

Strange, I don't see any patches in Fedora for Pillow, unless they are to something lower-level like libjpeg. What if you use the virtualenv but build Pillow from source instead of using the wheels?
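
One way to try that inside the virtualenv is pip's --no-binary option, e.g. pip install --no-binary :all: Pillow, which forces Pillow to be built from the source distribution instead of a prebuilt wheel; that would isolate whether the wheel's bundled libraries are the cause.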

@QuLogic changed the title from "0.2.3: many test failures" to "0.2.4: many test failures" Apr 11, 2018
@QuLogic (Contributor, Author) commented Apr 11, 2018

Still a problem with 0.2.4.

@jflesch (Member) commented Apr 11, 2018

Weird: wheels or not, the tests don't pass on one of my other systems either (Intel Atom + Debian stable amd64).

GerHobbelt pushed a commit to GerHobbelt/libpillowfight that referenced this issue Sep 4, 2023
Convert tests images to png

Closes openpaperwork#15

See merge request World/OpenPaperwork/libpillowfight!14