diff --git a/tutorial-2/kafkaconsumer_to_multipledestinations.md b/tutorial-2/kafkaconsumer_to_multipledestinations.md
index 073941b..cd1bde0 100644
--- a/tutorial-2/kafkaconsumer_to_multipledestinations.md
+++ b/tutorial-2/kafkaconsumer_to_multipledestinations.md
@@ -1,12 +1,12 @@
-## Part 2 - Reading from a Kafka Consumer
+## Part 2 - Reading with a Kafka Consumer
-In this part of the tutorial we will setup a pipeline that drains data from a Kafka Consumer, makes a couple of transformations and writes to multiple destinations.
+In this part of the tutorial, we set up a pipeline that drains data from a Kafka topic, performs a couple of transformations, and writes to multiple destinations.
-* Note: *If you'd like, feel free to download a previously created [pipeline](pipelines/KafkaConsumer_to_MultipleDestinations.json) that has been configured with the contents of this tutorial.*
+* Note: *If you'd like, you can download a previously created [pipeline](pipelines/KafkaConsumer_to_MultipleDestinations.json) that has been configured with the contents of this tutorial.*
-You may remember the data we are reading simulates credit card information and contains the card number :
+You may remember that the data we are reading simulates credit card information and contains the card number, as follows:
```json
{
"transaction_date":"dd/mm/YYYY",
@@ -17,27 +17,27 @@ You may remember the data we are reading simulates credit card information and c
"description":"transaction description of the purchase"
}
```
-We don't want to store credit card information in any of our data stores so this is a perfect opportunity to sanitize the data before it gets there. We'll use a few built in transformation stages to mask the card numbers so what makes it through are just the last 4 digits.
+We don't want to store credit card information in any of our data stores, so this is a perfect opportunity to sanitize the data before it gets there. We'll use a few built-in processor stages to mask the card numbers so that only the last 4 digits make it through.
-#### Defining the source
-* Drag the 'Kafka Consumer' origin stage into your canvas.
+#### Defining the Origin
+* Drag the Kafka Consumer origin stage into your canvas.
-* Go to the 'General' Tab in its configuration and select the version of Kafka that matches your environment in the 'Stage Library' dropdown.
+* In the Configuration settings, click the General tab. For Stage Library, select the version of Kafka that matches your environment.
-* In the 'Kafka' Tab pick 'SDC Record' as the Data Format, you may remember from Part 1 of this tutorial we sent data through Kafka in this format, so we want to make sure we decode the incoming data appropriately.
+* In the Kafka tab, set the Data Format to SDC Record. You may remember that in Part 1 of this tutorial we sent data to Kafka in this format, so we want to make sure we decode the incoming data appropriately.
* Set the Broker URI, Zookeeper URI and topic name to match the settings in your environment.
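+
+Before moving on, you can optionally confirm that messages are actually arriving on the topic. The sketch below is only an illustration: it assumes the kafka-python package is installed, and the broker address and topic name are placeholders for your own settings. The payloads are in SDC Record binary format, so don't expect readable JSON; counting them is enough to confirm the feed.
+```python
+# Optional sanity check, run outside Data Collector: count the messages
+# currently sitting on the topic. Assumes `pip install kafka-python`;
+# the broker address and topic name below are placeholders.
+from kafka import KafkaConsumer
+
+consumer = KafkaConsumer(
+    'cc_transactions',                    # your topic name
+    bootstrap_servers='localhost:9092',   # your Broker URI
+    auto_offset_reset='earliest',
+    enable_auto_commit=False,             # leave committed offsets untouched
+    consumer_timeout_ms=5000,             # stop iterating once the topic is drained
+)
+
+print('Messages on topic: %d' % sum(1 for _ in consumer))
+```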
#### Field Converter
-* It so happens that the card number field is defined as an integer in Avro. We will want to convert this to a string value. So type '/card_number' in the 'Fields to Convert' text box and set it to type String in 'Convert to String'
+* It so happens that the card number field is defined as an integer in Avro, and we want to convert it to a string value. Type "/card_number" in the Fields to Convert text box and set Convert to Type to String.
#### Jython Evaluator
-* In this stage we'll use a small piece of python code to look at the first few digits of the card number and figure out what type of card it is. We'll add that card type to a new field called 'credit_card_type'.
+* In this stage, we'll use a small piece of Python code to look at the first few digits of the card number and figure out what type of card it is. We'll add that card type to a new field called "credit_card_type".
-Go to the 'Jython' tab of the Jython Evaluator and enter the following piece of code.
+Go to the Jython tab of the Jython Evaluator and enter the following piece of code:
```python
@@ -78,43 +78,41 @@ for record in records:
-* In the 'Field Masker' stage configuration type '/card_number', set the mask type to custom. In this mode you can use '#' to show characters and any other character to use as a mask. e.g. a mask to show the last 4 digits of a credit card number :
+* In the Field Masker properties, click the Mask tab. In the Fields To Mask property, type "/card_number" and set the mask type to Custom. In this mode, '#' shows the character in that position and any other character masks it. For example, to mask all but the last 4 digits of the following credit card number: "0123 4567 8911 0123".
+
+    You can use the following mask: "---- ---- ---- ####".
- '0123 4567 8911 0123' would be
-
- '---- ---- ---- ####' will change the value to
-
- '---- ---- ---- 0123'
+    This changes the value to "---- ---- ---- 0123".
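+
+If it helps to see the rule spelled out, here is a minimal Python sketch of what the Custom mask does. This is not how Data Collector implements the stage, just an illustration of the '#' convention:
+```python
+def apply_custom_mask(value, mask):
+    """Keep characters where the mask has '#'; otherwise emit the mask character."""
+    return ''.join(
+        ch if m == '#' else m
+        for ch, m in zip(value, mask)
+    )
+
+print(apply_custom_mask('0123 4567 8911 0123', '---- ---- ---- ####'))
+# ---- ---- ---- 0123
+```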
#### Destinations
-In this particular example we will write the results to 2 destinations : Elasticsearch and an Amazon S3 bucket.
+In this particular example, we will write the results to two destinations: Elasticsearch and an Amazon S3 bucket.
-##### Setting up ElasticSearch
+##### Setting up Elasticsearch
-* Drag and Drop a 'ElasticSearch' stage to the Canvas.
+* Drag an Elasticsearch destination to the canvas.
-* Go to its Configuration and select the 'General' Tab. In the drop down for 'Stage Library' select the version of ElasticSearch you are running.
+* In its Configuration settings, select the General tab. For the Stage Library property, select the version of Elasticsearch you are running.
-* Go to the 'ElasticSearch' Tab and in the 'Cluster Name' textbox enter the name of your cluster as specified in elasticsearch.yml
+* Go to the Elasticsearch tab and, in the Cluster Name property, enter the name of your cluster as specified in elasticsearch.yml (if you're not sure of the name, see the quick check after this list).
-* In the 'Cluster URI' field specify the host:port where your ElasticSearch service is running
+* For Cluster URI, specify the host:port where your Elasticsearch service runs.
-* In 'Index' and 'Mapping' textboxes specify the name of your index.
+* In the Index and Mapping properties, specify the name of your index and mapping.
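+
+Here's one way to run that quick check. It is only a sketch; it assumes the Elasticsearch HTTP API is reachable at localhost:9200, so adjust the host and port for your environment:
+```python
+# Optional check: ask Elasticsearch for the cluster name it is reporting.
+# Assumes the HTTP endpoint is at localhost:9200; adjust for your environment.
+import json
+from urllib.request import urlopen
+
+with urlopen('http://localhost:9200') as resp:
+    info = json.load(resp)
+
+print('cluster_name:', info['cluster_name'])
+print('version:', info['version']['number'])
+```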
-##### Writing to an Amazon S3 bucket
-A common usecase is to backup data to S3, in this example we'll convert the data back to Avro format and store it there.
+##### Writing to an Amazon S3 Bucket
+A common use case is to back up data to S3. In this example, we'll convert the data back to the Avro data format and store it there.
-* Drag and drop the 'Amazon S3' stage to the canvas.
+* Drag an Amazon S3 destination to the canvas.
-* In its configuration enter in your 'Access Key ID' and 'Secret Access Key', select the 'Region' and enter the 'Bucket' name and 'Folder' you want to store the files in.
+* In its Configuration settings, on the Amazon S3 tab, enter your Access Key ID and Secret Access Key, select the Region, and enter the Bucket name and Folder where you want to store the files.
-* Pick Avro in the 'Data Format' drop down.
+* Pick Avro in the Data Format menu.
-* Go to the 'Avro' tab, you will need to specify the schema that you want encoded. Type in :
+* In the Avro tab, you need to specify the schema that you want encoded. Type in:
```json
{"namespace" : "cctest.avro",
"type": "record",
@@ -134,7 +132,7 @@ A common usecase is to backup data to S3, in this example we'll convert the data
-* To save space on the S3 buckets lets compress the data as its written. Choose BZip2 as the 'Avro Compression Codec'.
+* To save space in the S3 bucket, let's compress the data as it's written. Choose BZip2 as the Avro Compression Codec.
#### Execute the Pipeline
-* Hit Run and the pipeline should start draining Kafka messages and writing them to Elastic and Amazon S3.
+* Hit Start and the pipeline should start draining Kafka messages and writing them to Elasticsearch and Amazon S3.
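+
+Once the pipeline has been running for a bit, you can spot-check what landed in S3. The sketch below is an assumption-heavy illustration rather than part of the tutorial: it uses boto3 and fastavro (neither ships with Data Collector), and the bucket name and folder are placeholders for whatever you configured above:
+```python
+# Optional spot check: download one of the Avro files SDC wrote to S3 and
+# confirm the card numbers are masked. Assumes `pip install boto3 fastavro`
+# and that AWS credentials are available to boto3; bucket/prefix are placeholders.
+from io import BytesIO
+
+import boto3
+import fastavro
+
+s3 = boto3.client('s3')
+listing = s3.list_objects_v2(Bucket='my-sdc-backup', Prefix='cc_data/')
+
+# Read the first object in the folder (assumes at least one file has been written).
+key = listing['Contents'][0]['Key']
+body = s3.get_object(Bucket='my-sdc-backup', Key=key)['Body'].read()
+
+# fastavro should pick up the BZip2 codec recorded in the container header.
+for record in fastavro.reader(BytesIO(body)):
+    print(record)  # card_number values should show only the last 4 digits
+```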