From 021ee6ef261490f42a2cf32cd4738ea7a649c242 Mon Sep 17 00:00:00 2001
From: Yibing Liu
Date: Wed, 15 Nov 2017 18:26:18 +0800
Subject: [PATCH 1/2] fix doc for Docker

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ca1469266..2552bad21 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@
 
 ## Installation
 
-To avoid the trouble of environment setup, [running in docker container](#running-in-docker-container) is highly recommended. Otherwise follow the guidelines below to install the dependencies manually.
+To avoid the trouble of environment setup, [running in Docker container](#running-in-docker-container) is highly recommended. Otherwise follow the guidelines below to install the dependencies manually.
 
 ### Prerequisites
 - Python 2.7 only supported
@@ -344,7 +344,7 @@ Take several steps to launch the Docker image:
 - Download the Docker image
 
 ```bash
-nvidia-docker pull paddlepaddle/models:deep-speech-2
+sudo nvidia-docker pull paddlepaddle/models:deep-speech-2
 ```
 
 - Clone this repository

From 75f591d2262ecf484b9d1ad940f472a8cc337332 Mon Sep 17 00:00:00 2001
From: Yibing Liu
Date: Tue, 21 Nov 2017 17:13:12 +0800
Subject: [PATCH 2/2] update the rebuilt docker repo's name in doc

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 8e5485fcb..81467b24d 100644
--- a/README.md
+++ b/README.md
@@ -351,19 +351,19 @@ Take several steps to launch the Docker image:
 - Download the Docker image
 
 ```bash
-sudo nvidia-docker pull paddlepaddle/models:deep-speech-2
+nvidia-docker pull paddlepaddle/deep_speech:latest-gpu
 ```
 
 - Clone this repository
 
 ```
-git clone https://github.com/PaddlePaddle/models.git
+git clone https://github.com/PaddlePaddle/DeepSpeech.git
 ```
 
 - Run the Docker image
 
 ```bash
-sudo nvidia-docker run -it -v $(pwd)/models:/models paddlepaddle/models:deep-speech-2 /bin/bash
+sudo nvidia-docker run -it -v $(pwd)/DeepSpeech:/DeepSpeech paddlepaddle/deep_speech:latest-gpu /bin/bash
 ```
 
 Now go back and start from the [Getting Started](#getting-started) section, you can execute training, inference and hyper-parameters tuning similarly in the Docker container.
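With both patches applied, the README's Docker walkthrough reduces to the three-command sequence below. This is a minimal sketch assuming `nvidia-docker` is installed and a CUDA-capable GPU is present; the `run` dry-run wrapper is an illustrative helper added here for safe preview, not part of the README itself.

```shell
#!/bin/sh
set -e

# Dry-run wrapper: prints each step instead of executing it.
# Replace the echo with "$@" to actually run the commands.
run() { echo "+ $*"; }

# 1. Pull the rebuilt image (name per PATCH 2/2).
run nvidia-docker pull paddlepaddle/deep_speech:latest-gpu

# 2. Clone the repository at its new location.
run git clone https://github.com/PaddlePaddle/DeepSpeech.git

# 3. Start an interactive container with the clone mounted inside.
run sudo nvidia-docker run -it -v "$(pwd)/DeepSpeech:/DeepSpeech" \
    paddlepaddle/deep_speech:latest-gpu /bin/bash
```

Note the patches leave `sudo` on the `run` step but not the `pull` step; whether it is needed depends on whether the invoking user is in the `docker` group.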