// docs/modules/ROOT/pages/sql-gen-data.adoc
= Generating Streaming Data Using SQL
// Add required variables
:page-layout: tutorial
:page-product: cloud
:page-categories: Stream Processing, Get Started, SQL
:page-lang: sql
:page-enterprise:
:page-est-time: 10 mins
:description: Use SQL on Hazelcast to generate randomized streaming data for demo/POC purposes.
// Required: Summarize what this tutorial is about in a sentence or two. What you put here is reused as the tutorial's first paragraph and included in HTML description tags. Start the sentence with an action verb such as 'Deploy' or 'Connect'.
Before starting this tutorial, make sure that you meet the following prerequisites:
* A running Hazelcast cluster, either Viridian or Platform
* A connection to the SQL command line, either through the CLC or Management Center
* (For Step 2) a Kafka instance accessible by your Hazelcast cluster
== Step 1. Generating Data
Whatever option you choose when designing your tutorial should be carried through in subsequent parts.
////
. Look through the following SQL code from the link:https://docs.hazelcast.com/tutorials/SQL-Basics-on-Viridian[SQL Basics on Viridian (Stock Ticker Demo)] tutorial. The comments explain what the code is doing.
+
```sql
CREATE OR REPLACE VIEW trades AS
SELECT id,
<4> We seed the timestamp with a base value that equates to 21 Feb 2022. You can change this to any reasonable Unix timestamp.
<5> The `generate_stream` function is what makes this all work. In this example, we're generating 100 events per second.
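The argument to `generate_stream` sets the events-per-second rate, so you can tune the volume to your demo. As an illustrative sketch (the view name `trades_slow` and the pared-down column list here are hypothetical, not part of the original demo), a slower stream is easier to follow on screen:

```sql
-- Hypothetical variant: 5 events per second instead of 100.
-- generate_stream produces a single BIGINT column named v.
CREATE OR REPLACE VIEW trades_slow AS
SELECT v AS id,
       ROUND(RAND()*100, 0) AS amount
FROM TABLE(generate_stream(5));
```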
. Paste the above code into your SQL interface.
. Verify that the data is being generated using SQL queries. Because you're looking at streaming data, you'll need to use CTRL-C to stop each query.
```sql
SELECT * FROM trades;
```
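Streaming queries accept ordinary SQL clauses such as `WHERE`, so you can filter the stream server-side rather than scanning raw output. A minimal sketch using the `amount` column defined in the view above (as with any streaming query, press CTRL-C to stop):

```sql
-- Show only the larger randomized trades
SELECT *
FROM trades
WHERE amount > 90;
```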
== Step 2. Inserting into Kafka
////
You can send generated data to Kafka, which stores and replays it as it would data from any other streaming source. Instead of creating a view local to Hazelcast, you'll create a SQL mapping for the Kafka topic, then use an `INSERT` statement to send generated data to that topic.
. First, create a mapping for the data. Include all the fields that you'll generate with SQL. This statement creates the `trades` topic in Kafka.
+
```sql
CREATE OR REPLACE MAPPING trades (
'bootstrap.servers' = 'broker:9092'
);
```
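Before inserting, you can confirm that the mapping was registered. Hazelcast SQL provides a `SHOW MAPPINGS` statement that lists every mapping known to the cluster; `trades` should appear in the output:

```sql
-- List all registered mappings; `trades` should be among them
SHOW MAPPINGS;
```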
. Next, use an `INSERT INTO` statement to send the data to the `trades` topic you just created. The code that generates the data is exactly the same; the only difference is that we're sending it to Kafka instead of creating a local view.
+
```sql
INSERT INTO trades
SELECT id,
ROUND(RAND()*100, 0) as amount
FROM TABLE(generate_stream(100))); <5>
```
. You can now query this data as above using SQL.
You can also access this streaming data with the Pipeline API using the following call. (For details on setting up the Kafka source properties, see the link:https://docs.hazelcast.com/hazelcast/5.3/integrate/kafka-connector[Apache Kafka Connector] section of the documentation.)