Using the full image of Intel Python from Docker Hub doesn't provide the needed libraries. When using Docker to set up Jupyter notebooks for the Python distribution, you can either use the already prepared image or use an image as a base when customizing your own. From there, conda can be used to install Intel Python, seaborn, and any other data science libraries you may need or want. If a pip magic and conda magic similar to the above were added to Jupyter's default set of magic commands, I think it could go a long way toward solving the common problems users have when trying to install Python packages for use with Jupyter notebooks. For Python kernels, this will point to a particular Python version, but Jupyter is designed to be much more general: Jupyter has kernels for languages including Python 2, Python 3, Julia, R, Ruby, Haskell, and even C++ and Fortran! Both can point to different Pythons within the same virtual environment. If conda tells you the package you want doesn't exist, then use pip, or try the conda-forge channel, which has more packages available than the default conda channel.
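The kernel/shell mismatch alluded to above can be observed directly. Here is a minimal sketch (not from the original text) that compares the interpreter running the current kernel with the `python` a subshell would resolve on PATH:

```python
# Compare the interpreter running this code (e.g. a Jupyter kernel) with the
# `python` a subshell would resolve on PATH. When these differ, shell-style
# `!pip install` commands can target a different environment than the kernel.
import shutil
import sys

kernel_python = sys.executable         # interpreter executing this kernel
shell_python = shutil.which("python")  # what `python` means to a subshell

print("kernel python:", kernel_python)
print("shell  python:", shell_python)
if shell_python and shell_python != kernel_python:
    print("Warning: shell commands may install into a different environment.")
```

If the two paths differ, any package installed from a terminal (or with a `!`-prefixed shell command) may be invisible to the notebook.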
To run the notebook: Important: Jupyter installation requires Python 3. What was happening was that my IPython notebook was using the global notebook installation, since my newly created conda environment did not have Jupyter Notebook installed in it. Run an Image: after the image is built, you can check Docker's image registry on your local machine to see the image in the list. These files are called notebooks, and the best part about them is that they are both a human-readable file containing code, the results of the interpreter session, analysis results, and supporting figures, and an executable file: you can run the code in them and see the results right away. Just make sure your plots and figures are not in interactive mode, or they will not be displayed. Not all without issues, but I managed.
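Since notebooks are described here as both human-readable and executable, it helps to see that a `.ipynb` file is plain JSON. The cell contents below are illustrative, but the top-level structure matches the nbformat 4 schema:

```python
# A Jupyter notebook is just JSON: prose, code, and outputs live side by side
# in one file. Minimal illustration of the on-disk structure (nbformat 4).
import json

notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Analysis notes"]},
        {"cell_type": "code", "metadata": {}, "execution_count": 1,
         "source": ["1 + 1"], "outputs": []},
    ],
}
print(json.dumps(notebook, indent=2)[:200])
```

Because the format is plain text, notebooks diff and version-control reasonably well once outputs are stripped.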
Then you can copy the file to the correct location. One final addendum: I have a huge amount of respect and appreciation for the developers of Jupyter, conda, pip, and related tools that form the foundations of the Python data science ecosystem. As noted above, we can get around this by explicitly identifying where we want packages to be installed. Here also is my conda info: Current conda install: platform : osx-64 conda version : 4. I found that the kernel wasn't running the version of Python I intended.
To get started using a Docker image with Jupyter notebooks, I downloaded the image I wanted from Docker Hub and set up a volume to use with the image. Dockerfile for Customization: to create a customized Docker image based on Intel Python that can be run in Jupyter notebooks, I set up a Dockerfile based on the Docker Hub Dockerfiles from Intel Python. We really need to redesign that system. I looked at the sys. This allows files to be easily accessible and version controlled after closing down the notebook.
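As a sketch of the volume setup, the `docker run` invocation can be assembled like this; the image name and paths below are illustrative placeholders, not the exact ones used in the original setup:

```python
# Sketch: assemble a `docker run` command that launches a Jupyter image with a
# host directory mounted as a volume, so notebooks persist outside the container.
# The image name and paths below are hypothetical placeholders.
host_dir = "/home/user/notebooks"     # hypothetical host folder to keep notebooks in
image = "my-intel-python-jupyter"     # hypothetical name of the customized image

cmd = [
    "docker", "run",
    "-p", "8888:8888",                # publish Jupyter's default port
    "-v", f"{host_dir}:/notebooks",   # mount the host dir inside the container
    image,
]
print(" ".join(cmd))
```

The `-v` bind mount is what makes the notebook files survive container shutdown and stay reachable for version control on the host.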
But the downloading process is still not working. You might need to do this for every Jupyter session, as the instances don't seem to save changes made in the terminal across sessions. I have a few ideas, some of which might even be useful: Potential Changes to Jupyter. As I mentioned, the fundamental issue is a mismatch between Jupyter's shell environment and compute kernel. Installing jupyter into the target environment corrected the problem. For what it's worth, tl;dr: I couldn't get import pymc3 as pm to work in Jupyter Lab, so I had to resort to the notebook.
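Before resorting to workarounds, it is worth checking whether the failing package is visible to the current kernel at all. A minimal check (the package name here is just an example; substitute the one that fails for you, e.g. pymc3 in the anecdote above):

```python
# Check whether a package is importable by the *current* kernel's interpreter,
# and print an install command targeting that exact interpreter if it is not.
import importlib.util
import sys

package = "numpy"  # example; substitute the package that fails to import
available = importlib.util.find_spec(package) is not None

if available:
    print(f"{package} is importable by {sys.executable}")
else:
    print(f"{package} is NOT importable by this kernel.")
    print(f"Try: {sys.executable} -m pip install {package}")
```

If the package is installed somewhere but not importable here, the kernel is running a different environment than the one you installed into.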
It happened to be that it had a similar name, since it comes from a similar environment. These packages can then be installed with the conda dependency and environment manager. Often this will be your home directory. I had a similar issue. I also used the conda install command to install all the packages.
Have a question about this project? So I thought there might be some compatibility issue behind Jupyter Notebook. This is one reason that pip install no longer appears universally in introductory materials, and why experienced Python educators like David Beazley caution against its casual use. Python packages must be installed separately for each copy of Python you use, and if you are using virtualenvs or conda envs, packages must be installed into each environment where you need them. Anaconda conveniently installs Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science. After proposing some simple solutions that can be used today, I went into a detailed explanation of why these solutions are necessary: it comes down to the fact that in Jupyter, the kernel is disconnected from the shell. For my purposes I used the full version of Intel Python 2. For example, I installed tensorflow in a conda environment named 'tensorflow3'.
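For an environment like the 'tensorflow3' example, the usual fix is to register that environment's interpreter with Jupyter via ipykernel. A sketch, to be run with that environment's python; it assumes ipykernel is installed there:

```python
# Sketch: register a conda environment's interpreter as a named Jupyter kernel
# using ipykernel, so the notebook UI can select it. Run this with the target
# environment's python (e.g. the 'tensorflow3' env mentioned above).
import subprocess
import sys

def register_kernel(name: str, display_name: str) -> None:
    """Register the running interpreter as a Jupyter kernel called `name`."""
    subprocess.check_call([
        sys.executable, "-m", "ipykernel", "install",
        "--user",                       # install the kernelspec for this user only
        "--name", name,                 # internal kernel name
        "--display-name", display_name, # label shown in the notebook UI
    ])

# Example (requires ipykernel in the environment):
# register_kernel("tensorflow3", "Python (tensorflow3)")
```

Once registered, the environment shows up as a selectable kernel, so the notebook runs the interpreter that actually has tensorflow installed.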
Either the package is not installed in one, or a different version of the package is installed. Still, I cannot import it after restarting the kernel, etc. Click on the "Untitled" text next to the Jupyter logo at the top of the notebook and rename the file something meaningful. I know it is not exactly what you want, but have you tried printing it to PDF using your browser (Ctrl+P)? Maybe I should sit down and write a post about this. Here is a short snippet that should work in general: that bit of extra boilerplate makes certain that you are running the pip version associated with the current Python kernel, so that the installed packages can be used in the current notebook. New Jupyter Magic Functions: even if the above changes to the stack are not possible or desirable, we could simplify the user experience somewhat by introducing %pip and %conda magic functions within the Jupyter notebook that detect the current kernel and make certain that packages are installed in the correct location.
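The idea behind the boilerplate described above can be sketched in pure Python: invoke pip via the same interpreter that runs the kernel, so installed packages land where this notebook can import them. This is equivalent in spirit to a %pip-style magic; the function name is my own:

```python
# Run pip against the interpreter of the *current* kernel, so the installed
# package is importable from this notebook rather than a different environment.
import subprocess
import sys

def pip_install(package: str) -> None:
    """Install `package` into the environment of the running interpreter."""
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# Example (commented out so running this cell doesn't modify the environment):
# pip_install("seaborn")
```

Using `sys.executable -m pip` rather than a bare `pip` is the whole trick: it pins the install to the kernel's environment regardless of what the shell's PATH says.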