1. Reading data from a collection
- fromCollection
Reads data from an in-memory collection.
import org.apache.flink.streaming.api.scala._

// Sample case class: sensor id, timestamp, temperature
case class SensorReading(id: String, timestamp: Long, temperature: Double)

object Sensor {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream1 = env
      .fromCollection(List(
        SensorReading("sensor_1", 1547718199, 35.8),
        SensorReading("sensor_6", 1547718201, 15.4),
        SensorReading("sensor_7", 1547718202, 6.7),
        SensorReading("sensor_10", 1547718205, 38.1)
      ))
    stream1.print("stream1:").setParallelism(1)
    env.execute()
  }
}
- fromElements
// rarely used
env.fromElements(1, 2, "hhh")
- fromParallelCollection
// rarely used
// reads from the given splittable sequence
env.fromParallelCollection(new NumberSequenceIterator(1L, 1000L))
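A minimal runnable sketch, assuming the standard Scala API imports; NumberSequenceIterator (from org.apache.flink.util) is a SplittableIterator, so the sequence can be split across the parallel source subtasks:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.NumberSequenceIterator

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Emits the numbers 1..1000, distributed across the source's parallel subtasks
val numbers = env.fromParallelCollection(new NumberSequenceIterator(1L, 1000L))
numbers.print("numbers").setParallelism(1)
env.execute("parallel-collection source")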
2. Reading data from a file
env.readTextFile("YOUR_FILE_PATH")
3. Using a Kafka topic as the data source
- Add the dependency
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
    <version>1.10.0</version>
</dependency>
- Example
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

val properties = new Properties()
properties.setProperty("bootstrap.servers", "mayi101:9092")
properties.setProperty("group.id", "consumer-group")
properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
properties.setProperty("auto.offset.reset", "latest")

// Consume the "sensor" topic as a stream of raw strings
val stream3 = env.addSource(new FlinkKafkaConsumer011[String]("sensor",
  new SimpleStringSchema(), properties))
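To check that records are arriving, the stream can be printed and the job started; a minimal usage sketch following the stream3 definition above:

stream3.print("stream3").setParallelism(1)
env.execute("kafka source")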
- Testing
kafka-console-producer.sh \
  --broker-list mayi101:9092 --topic sensor
4. Custom source
Besides the built-in sources above, we can also define our own source. All that is needed is to pass a SourceFunction to addSource; see the separate article on custom sources (a sketch follows below).
env.addSource( new MySensorSource() )
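The original refers to a separate article for the implementation; below is only a minimal sketch of what a MySensorSource could look like as a SourceFunction[SensorReading]. The random-walk temperature logic is an assumption for illustration, not the original implementation:

import org.apache.flink.streaming.api.functions.source.SourceFunction
import scala.util.Random

class MySensorSource extends SourceFunction[SensorReading] {
  // Flag that controls the emit loop; flipped by cancel()
  @volatile private var running = true

  override def run(ctx: SourceFunction.SourceContext[SensorReading]): Unit = {
    val rand = new Random()
    // Initialize 10 sensors with random base temperatures (illustrative values)
    var curTemps = (1 to 10).map(i => (s"sensor_$i", 60 + rand.nextGaussian() * 20))
    while (running) {
      // Random-walk each temperature and emit one reading per sensor
      curTemps = curTemps.map { case (id, temp) => (id, temp + rand.nextGaussian()) }
      val ts = System.currentTimeMillis()
      curTemps.foreach { case (id, temp) => ctx.collect(SensorReading(id, ts, temp)) }
      Thread.sleep(1000)
    }
  }

  override def cancel(): Unit = running = false
}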