1. Why MinIO
There are many distributed OSS storage systems: MinIO, FastDFS, GlusterFS, HDFS, and so on. Among the lightweight options, MinIO and FastDFS are the most common. Which one to pick should be weighed by community activity, ease of integration, and similar factors; in short, cost comes first.
At the macro level, that comes down to comparing the GitHub numbers:
MinIO: github.com/minio/minio
FastDFS: github.com/happyfish10…
A quick comparison makes the choice obvious. The most important point: MinIO supports the S3 protocol.
2. Reverse-proxy issues in this deployment
This deployment uses docker-compose to run MinIO in erasure-coded mode. Although MinIO can take ACCESS KEYS and other parameters from environment variables, we still want to put the MinIO console behind a reverse proxy.
The problem is that both the MinIO Chinese community and the official docs seem to assume the OSS node IP and port are exposed for direct access, which is not acceptable in a production environment. In production, business applications should of course call MinIO over a LAN address.
3. Deployment
3.1 Install Docker and docker-compose
Install Docker on your own.
Install docker-compose:
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
Make the binary executable:
chmod +x /usr/local/bin/docker-compose
Create a symlink:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
Verify the installation:
docker-compose --version
3.2 Install the MinIO cluster
3.2.1 docker-compose.yml
version: '3.7'

# Settings and configuration common to all containers
x-minio-common: &minio-common
  image: minio/minio
  command: server --console-address ":9001" http://minio{1...2}/data
  expose:
    - "9000"
  environment:
    MINIO_ROOT_USER: minioadmin
    MINIO_ROOT_PASSWORD: minioadmin*****
    # CONSOLE_SUBPATH: /s3/
    MINIO_BROWSER_REDIRECT_URL: http://<your-proxy-ip-or-domain>:30100/s3/
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# Start 2 docker containers running minio server instances.
# nginx reverse-proxies port 9000 for load balancing;
# the web consoles are reachable on ports 9001 and 9002.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    ports:
      - "9001:9001"
    volumes:
      - ./data/data1:/data
  minio2:
    <<: *minio-common
    hostname: minio2
    ports:
      - "9002:9001"
    volumes:
      - ./data/data2:/data
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    volumes:
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "9000:9000"
    depends_on:
      - minio1
      - minio2
The key point: if you need to reverse-proxy the console, you must set MinIO's console sub-path parameters:
  environment:
    MINIO_ROOT_USER: minioadmin
    MINIO_ROOT_PASSWORD: minioadmin*****
    # CONSOLE_SUBPATH: /s3/
    MINIO_BROWSER_REDIRECT_URL: http://<your-proxy-ip-or-domain>:30100/s3/
3.2.2 Next, create a config folder and add the configuration file nginx.conf:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    # include /etc/nginx/conf.d/*.conf;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
    }

    server {
        listen 9000;
        listen [::]:9000;
        server_name localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }
}
This nginx instance serves both MinIO itself and the SDK calls from the business systems.
3.2.3 Reverse-proxying the minio-console under a sub-path.
Here we only need to proxy the console address; pointing at one instance port or several both work. The actual files held in OSS storage will never be exposed for direct external access anyway; they must always pass through the business side for an authorization check first.
location /s3/ {
    rewrite ^/s3/(.*)$ /$1 break;
    proxy_pass http://192.168.0.120:9001;
}
Reverse-proxying MinIO under a sub-path is much like proxying Kibana and similar tools: the backend behind the proxy must understand the sub-path prefix. MinIO's CONSOLE_SUBPATH and MINIO_BROWSER_REDIRECT_URL parameters serve exactly this purpose.
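As a sketch, the sub-path proxy on the front-end host could look like the server block below. Assumptions not stated in the original: the proxy listens on port 30100 to match the MINIO_BROWSER_REDIRECT_URL above, and 192.168.0.120:9001 is one of the console instances; the Upgrade/Connection headers are included because the console keeps a WebSocket open for live updates.

```nginx
server {
    # Hypothetical listen port, matching MINIO_BROWSER_REDIRECT_URL
    listen 30100;
    server_name _;

    location /s3/ {
        # Strip the /s3/ prefix before handing the request to the console
        rewrite ^/s3/(.*)$ /$1 break;

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # The console uses WebSockets, so pass the Upgrade handshake through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_pass http://192.168.0.120:9001;
    }
}
```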
3.2.4 Verification
If you do not set the subpath, the console's css, js, and other static assets will inevitably fail to load.
This problem has also been raised as an issue on MinIO's GitHub; the answer is not hard to find: github.com/minio/minio…
4. Business application integration
The endpoint the business application integrates against should be the nginx address we configured in docker-compose above, i.e. the LAN address.
4.1 Java integration
4.1.1 Dependencies
<dependency>
    <groupId>io.minio</groupId>
    <artifactId>minio</artifactId>
    <version>7.0.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-io/commons-io -->
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.8.0</version>
</dependency>
4.1.2 API operations
private String endpoint = "http://192.168.81.128:9000";
private String accessKey = "****";
private String secretKey = "*****";

@Test
void contextLoads() throws Exception {
    // 1. Create a MinioClient from the MinIO service URL/port and the access/secret keys
    MinioClient minioClient = new MinioClient(endpoint, accessKey, secretKey);
    boolean isExists = minioClient.bucketExists("test");
    if (isExists) {
        System.out.println("Bucket 'test' already exists");
    } else {
        minioClient.makeBucket("test");
    }
    // Upload a file into the bucket
    minioClient.putObject("test", "123.xml", "d:/123.xml", null);
    System.out.println("File uploaded successfully...");
    // Download the file we just uploaded
    InputStream in = minioClient.getObject("test", "123.xml");
    List<String> strings = IOUtils.readLines(in, "UTF-8");
    strings.forEach(System.out::println);
}
4.1.3 API encapsulation
A real project will of course wrap the API up front, similar to frameworks such as RuoYi.
public class OssClient {

    private final String configKey;
    private final OssProperties properties;
    private final AmazonS3 client;

    public OssClient(String configKey, OssProperties ossProperties) {
        this.configKey = configKey;
        this.properties = ossProperties;
        try {
            AwsClientBuilder.EndpointConfiguration endpointConfig =
                new AwsClientBuilder.EndpointConfiguration(properties.getEndpoint(), properties.getRegion());

            AWSCredentials credentials = new BasicAWSCredentials(properties.getAccessKey(), properties.getSecretKey());
            AWSCredentialsProvider credentialsProvider = new AWSStaticCredentialsProvider(credentials);
            ClientConfiguration clientConfig = new ClientConfiguration();
            if (OssConstant.IS_HTTPS.equals(properties.getIsHttps())) {
                clientConfig.setProtocol(Protocol.HTTPS);
            } else {
                clientConfig.setProtocol(Protocol.HTTP);
            }
            AmazonS3ClientBuilder build = AmazonS3Client.builder()
                .withEndpointConfiguration(endpointConfig)
                .withClientConfiguration(clientConfig)
                .withCredentials(credentialsProvider)
                .disableChunkedEncoding();
            if (!StringUtils.containsAny(properties.getEndpoint(), OssConstant.CLOUD_SERVICE)) {
                // MinIO over HTTPS restricts access to domain names; this enables
                // path-style access, and the endpoint should then be a domain name
                build.enablePathStyleAccess();
            }
            this.client = build.build();
            createBucket();
        } catch (Exception e) {
            if (e instanceof OssException) {
                throw e;
            }
            throw new OssException("Configuration error! Please check the system configuration: [" + e.getMessage() + "]");
        }
    }
    public void createBucket() {
        try {
            String bucketName = properties.getBucketName();
            if (client.doesBucketExistV2(bucketName)) {
                return;
            }
            CreateBucketRequest createBucketRequest = new CreateBucketRequest(bucketName);
            createBucketRequest.setCannedAcl(CannedAccessControlList.PublicRead);
            client.createBucket(createBucketRequest);
            client.setBucketPolicy(bucketName, getPolicy(bucketName, PolicyType.READ));
        } catch (Exception e) {
            throw new OssException("Failed to create bucket, please verify the configuration: [" + e.getMessage() + "]");
        }
    }
    public UploadResult upload(byte[] data, String path, String contentType) {
        return upload(new ByteArrayInputStream(data), path, contentType);
    }

    public UploadResult upload(InputStream inputStream, String path, String contentType) {
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentType(contentType);
            metadata.setContentLength(inputStream.available());
            PutObjectRequest putObjectRequest = new PutObjectRequest(properties.getBucketName(), path, inputStream, metadata);
            // Set the uploaded object's ACL to public-read
            putObjectRequest.setCannedAcl(CannedAccessControlList.PublicRead);
            client.putObject(putObjectRequest);
        } catch (Exception e) {
            throw new OssException("Failed to upload file, please check the configuration: [" + e.getMessage() + "]");
        }
        return UploadResult.builder().url(getUrl() + "/" + path).filename(path).build();
    }

    public void delete(String path) {
        path = path.replace(getUrl() + "/", "");
        try {
            client.deleteObject(properties.getBucketName(), path);
        } catch (Exception e) {
            throw new OssException("Failed to delete file, please check the configuration: [" + e.getMessage() + "]");
        }
    }

    public UploadResult uploadSuffix(byte[] data, String suffix, String contentType) {
        return upload(data, getPath(properties.getPrefix(), suffix), contentType);
    }

    public UploadResult uploadSuffix(InputStream inputStream, String suffix, String contentType) {
        return upload(inputStream, getPath(properties.getPrefix(), suffix), contentType);
    }
    /**
     * Get file metadata
     *
     * @param path full file path
     */
    public ObjectMetadata getObjectMetadata(String path) {
        S3Object object = client.getObject(properties.getBucketName(), path);
        return object.getObjectMetadata();
    }

    public Object getObject(String path) {
        return client.getObject(properties.getBucketName(), path);
    }
    public String getUrl() {
        String domain = properties.getDomain();
        String endpoint = properties.getEndpoint();
        String header = OssConstant.IS_HTTPS.equals(properties.getIsHttps()) ? "https://" : "http://";
        // Cloud providers: return directly
        if (StringUtils.containsAny(endpoint, OssConstant.CLOUD_SERVICE)) {
            if (StringUtils.isNotBlank(domain)) {
                return header + domain;
            }
            return header + properties.getBucketName() + "." + endpoint;
        }
        // MinIO is handled separately
        if (StringUtils.isNotBlank(domain)) {
            return header + domain + "/" + properties.getBucketName();
        }
        return header + endpoint + "/" + properties.getBucketName();
    }

    public String getPath(String prefix, String suffix) {
        // Generate a uuid
        String uuid = IdUtil.fastSimpleUUID();
        // File path
        String path = DateUtils.datePath() + "/" + uuid;
        if (StringUtils.isNotBlank(prefix)) {
            path = prefix + "/" + path;
        }
        return path + suffix;
    }

    public String getConfigKey() {
        return configKey;
    }
    private static String getPolicy(String bucketName, PolicyType policyType) {
        StringBuilder builder = new StringBuilder();
        builder.append("{\n\"Statement\": [\n{\n\"Action\": [\n");
        if (policyType == PolicyType.WRITE) {
            builder.append("\"s3:GetBucketLocation\",\n\"s3:ListBucketMultipartUploads\"\n");
        } else if (policyType == PolicyType.READ_WRITE) {
            builder.append("\"s3:GetBucketLocation\",\n\"s3:ListBucket\",\n\"s3:ListBucketMultipartUploads\"\n");
        } else {
            builder.append("\"s3:GetBucketLocation\"\n");
        }
        builder.append("],\n\"Effect\": \"Allow\",\n\"Principal\": \"*\",\n\"Resource\": \"arn:aws:s3:::");
        builder.append(bucketName);
        builder.append("\"\n},\n");
        if (policyType == PolicyType.READ) {
            builder.append("{\n\"Action\": [\n\"s3:ListBucket\"\n],\n\"Effect\": \"Deny\",\n\"Principal\": \"*\",\n\"Resource\": \"arn:aws:s3:::");
            builder.append(bucketName);
            builder.append("\"\n},\n");
        }
        builder.append("{\n\"Action\": ");
        switch (policyType) {
            case WRITE:
                builder.append("[\n\"s3:AbortMultipartUpload\",\n\"s3:DeleteObject\",\n\"s3:ListMultipartUploadParts\",\n\"s3:PutObject\"\n],\n");
                break;
            case READ_WRITE:
                builder.append("[\n\"s3:AbortMultipartUpload\",\n\"s3:DeleteObject\",\n\"s3:GetObject\",\n\"s3:ListMultipartUploadParts\",\n\"s3:PutObject\"\n],\n");
                break;
            default:
                builder.append("\"s3:GetObject\",\n");
                break;
        }
        builder.append("\"Effect\": \"Allow\",\n\"Principal\": \"*\",\n\"Resource\": \"arn:aws:s3:::");
        builder.append(bucketName);
        builder.append("/*\"\n}\n],\n\"Version\": \"2012-10-17\"\n}\n");
        return builder.toString();
    }
}
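For reference, with a hypothetical bucket named `test`, the READ branch of `getPolicy` above would produce roughly the following policy document: `GetBucketLocation` allowed, bucket listing denied, and public read on the objects themselves.

```json
{
  "Statement": [
    {
      "Action": ["s3:GetBucketLocation"],
      "Effect": "Allow",
      "Principal": "*",
      "Resource": "arn:aws:s3:::test"
    },
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Deny",
      "Principal": "*",
      "Resource": "arn:aws:s3:::test"
    },
    {
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Resource": "arn:aws:s3:::test/*"
    }
  ],
  "Version": "2012-10-17"
}
```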
4.1.4 Authorized file download in business code.
Controller:
@GetMapping("/downOssload/{id}")
public ResponseEntity<?> download(HttpServletResponse response, @PathVariable String id) throws Exception {
    DiskFileVo file = iDiskFileService.queryById(Long.parseLong(id));
    if (file == null) {
        return ResponseEntity
            .ok()
            .body(R.fail("File does not exist"));
    }
    byte[] data = iDiskOssFileService.downOssload(response, file);
    ByteArrayResource resource = new ByteArrayResource(data);
    return ResponseEntity
        .ok()
        .contentLength(data.length)
        .header("Content-Type", "application/octet-stream")
        .header("Content-Disposition", "attachment; filename=\"" + file.getFileName() + "\"")
        .header("Cache-Control", "no-cache")
        .body(resource);
}
Service:
@Override
@Transactional(rollbackFor = Exception.class)
public byte[] downOssload(HttpServletResponse response, DiskFileVo file) throws Exception {
    OssClient storage = OssFactory.instance();
    try {
        final S3Object s3Object = (S3Object) storage.getObject(file.getFilePath());
        final S3ObjectInputStream stream = s3Object.getObjectContent();
        byte[] content = IOUtils.toByteArray(stream);
        s3Object.close();
        return content;
    } catch (IOException ioException) {
        ioException.printStackTrace();
    } catch (AmazonServiceException serviceException) {
        serviceException.printStackTrace();
    } catch (AmazonClientException clientException) {
        clientException.printStackTrace();
    }
    return null;
}