A Study of Voice Changing with the FMOD SDK


0. Official Documentation

If you are comfortable learning on your own, the official documentation alone will get you there.


1. Downloading

1.1 Download

Log in to the FMOD website and open the download page, then pick the product and version: FMOD Studio Suite -> FMOD Engine, choose a version, and select Android as the platform.

1.2 Directory layout at a glance

After extracting the archive, go to the fmodstudioapi{version}/api/core/ directory


  • example: demo projects for the various features, built two ways with the NDK: cmake and ndkbuild
  • inc: the headers and helper C++ sources provided by FMOD
  • lib: fmod.jar and the dynamic .so libraries for each ABI architecture; import only what you need

If you want a rough idea of what each demo shows, open app/src/main/cpp/{project name}.cpp under the corresponding demo and read the DOC comment at the top of the file.


2. Importing the voice-effect demo "effects"

Demo path: /fmodstudioapi20104android/api/core/examples/androidstudio/cmake/effects

In my opinion, working through the official demo is the best way to get started.

Note: do not pull the demo directory out on its own, because it references library files that live in directories outside of it.

The demo is very simple; it shows four voice effects: lowpass, highpass, echo and flange.

It also shows how the demo builds its NDK code with CMake, so if you run into NDK configuration problems later you can refer back to it.


3. Creating a New Project

3.1 Create the project

  • When creating the project, choose the Native C++ template

3.2 Import the .so libraries

Open the extracted directory: fmodstudioapi20104android/api/core/

  • Copy the inc and lib contents into the corresponding directories of your project (the .so files go under app/libs/{ABI}, which is the path the CMake config below expects)
  • Choose the supported ABI architectures: open the build.gradle file under the app module and configure the NDK ABIs
 defaultConfig {
        ...
        ndk {
            abiFilters "x86", "armeabi-v7a", "arm64-v8a"
            //Note: once you have picked the supported ABIs here, you can delete the redundant .so files for the other ABIs under the libs directory
            //In general these three ABIs cover virtually every Android device on the market today
        }
    }
  • Configure the CMakeLists.txt file

To make the configuration easier, I moved this file into the app/ module directory (so ${CMAKE_SOURCE_DIR} in the script below is app/).

After moving it, the CMakeLists path of course has to be updated in the module-level (app) build.gradle:

android{
   ...
   
   //Specify the CMakeLists.txt path and version; a path without any prefix is resolved relative to this build.gradle
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
            version "3.10.2"
        }
    }
   }
    

Detailed CMakeLists.txt configuration

The CMakeLists configuration is fairly involved; it is worth learning the basic syntax first.

Pay particular attention to where this file sits in the project, because that determines the paths used in the configuration.

Once it is configured, run a build first to make sure everything works before moving on to the other steps.

cmake_minimum_required(VERSION 3.4.1)

#Variable holding the directory that contains the libfmod.so and libfmodL.so we are about to import
## ${CMAKE_SOURCE_DIR} is the root directory containing this CMakeLists.txt, i.e. the app/ directory here
## ${ANDROID_ABI} is the ABI currently being built, one of the abiFilters set in build.gradle
set(FMOD_LIB_DIR ${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI})


# Collect the source files (all C/C++ files under src/main/cpp) into my_source
file(GLOB my_source src/main/cpp/*.cpp src/main/cpp/*.c src/main/cpp/*/*.cpp src/main/cpp/*/*.c)

#Locate the prebuilt NDK log library
find_library( # Defines the name of the path variable that stores the
        # location of the NDK library.
        log-lib

        # Specifies the name of the NDK library that
        # CMake needs to locate.
        log)


#Declare libfmod.so as an imported shared library
add_library(
        libfmod
        SHARED
        IMPORTED
)

#Point the imported libfmod target at the actual .so file
set_target_properties(
        libfmod
        PROPERTIES IMPORTED_LOCATION
        ${FMOD_LIB_DIR}/libfmod.so
)


#Declare libfmodL.so (the logging build) as an imported shared library
add_library(
        libfmodL
        SHARED
        IMPORTED
)


#Point the imported libfmodL target at the actual .so file
set_target_properties(
        libfmodL
        PROPERTIES IMPORTED_LOCATION
        ${FMOD_LIB_DIR}/libfmodL.so
)


#Add our own library
add_library(
        myfmod #name of the library we are building
        SHARED #build it as a shared library
        ${my_source} #the C/C++ source files collected above
)


#Link our library against the imported FMOD libraries and the NDK log library; myfmod is the .so that finally gets built
#The library order follows the usual gcc linking rule: a library comes after the libraries that depend on it
target_link_libraries(myfmod libfmod libfmodL ${log-lib})

As an aside: the only difference between libfmod.so and libfmodL.so is that the latter adds logging. Depending on which ABI architectures your project has to support and whether you need logging, import only the .so files you actually use; this can shrink the APK noticeably.


4. FMOD Basics

4.1 Initialization

//Load the native libraries; the names must match the .so files you imported and the target built by CMakeLists.txt
System.loadLibrary("fmod");
System.loadLibrary("fmodL");
System.loadLibrary("myfmod");

//It is recommended to call the FMOD init method in the onCreate of your main Activity
org.fmod.FMOD.init(this);
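The Java call above only loads the libraries and hooks FMOD into the Android runtime; the actual audio work happens on the native side through an FMOD::System object. The following is a minimal sketch of that native counterpart, assuming a hypothetical JNI entry point named nativePlay (not part of the demo) and a valid audio file path passed down from Java; it only illustrates the System create/init/play/release lifecycle used throughout the rest of this article.

#include <jni.h>
#include <unistd.h>
#include <fmod.hpp>

using namespace FMOD;

//Hypothetical JNI entry point: plays a single file until it finishes
extern "C" JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_nativePlay(JNIEnv *env, jclass clazz, jstring path) {
    System *system = nullptr;
    Sound *sound = nullptr;
    Channel *channel = nullptr;

    System_Create(&system);                      //create the FMOD system object
    system->init(32, FMOD_INIT_NORMAL, nullptr); //32 virtual channels, default flags

    const char *cPath = env->GetStringUTFChars(path, nullptr);
    system->createSound(cPath, FMOD_DEFAULT, nullptr, &sound);
    system->playSound(sound, nullptr, false, &channel);

    bool playing = true;
    while (playing) {                            //poll until playback ends
        system->update();
        channel->isPlaying(&playing);
        usleep(1000 * 20);
    }

    env->ReleaseStringUTFChars(path, cPath);
    sound->release();
    system->close();
    system->release();
}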

4.2 DSP effects

The FMOD SDK provides a set of DSP effect units that let developers adjust audio parameters quickly. See fmod_dsp_effects.h in the demo for the full list.

typedef enum
{
    FMOD_DSP_TYPE_UNKNOWN,
    FMOD_DSP_TYPE_MIXER,
    FMOD_DSP_TYPE_OSCILLATOR,
    FMOD_DSP_TYPE_LOWPASS,
    FMOD_DSP_TYPE_ITLOWPASS,
    FMOD_DSP_TYPE_HIGHPASS,
    FMOD_DSP_TYPE_ECHO,
    FMOD_DSP_TYPE_FADER,
    FMOD_DSP_TYPE_FLANGE,
    FMOD_DSP_TYPE_DISTORTION,
    FMOD_DSP_TYPE_NORMALIZE,
    FMOD_DSP_TYPE_LIMITER,
    FMOD_DSP_TYPE_PARAMEQ,
    FMOD_DSP_TYPE_PITCHSHIFT,
    FMOD_DSP_TYPE_CHORUS,
    FMOD_DSP_TYPE_VSTPLUGIN,
    FMOD_DSP_TYPE_WINAMPPLUGIN,
    FMOD_DSP_TYPE_ITECHO,
    FMOD_DSP_TYPE_COMPRESSOR,
    FMOD_DSP_TYPE_SFXREVERB,
    FMOD_DSP_TYPE_LOWPASS_SIMPLE,
    FMOD_DSP_TYPE_DELAY,
    FMOD_DSP_TYPE_TREMOLO,
    FMOD_DSP_TYPE_LADSPAPLUGIN,
    FMOD_DSP_TYPE_SEND,
    FMOD_DSP_TYPE_RETURN,
    FMOD_DSP_TYPE_HIGHPASS_SIMPLE,
    FMOD_DSP_TYPE_PAN,
    FMOD_DSP_TYPE_THREE_EQ,
    FMOD_DSP_TYPE_FFT,
    FMOD_DSP_TYPE_LOUDNESS_METER,
    FMOD_DSP_TYPE_ENVELOPEFOLLOWER,
    FMOD_DSP_TYPE_CONVOLUTIONREVERB,
    FMOD_DSP_TYPE_CHANNELMIX,
    FMOD_DSP_TYPE_TRANSCEIVER,
    FMOD_DSP_TYPE_OBJECTPAN,
    FMOD_DSP_TYPE_MULTIBAND_EQ,
    FMOD_DSP_TYPE_MAX,
    FMOD_DSP_TYPE_FORCEINT = 65536    /* Makes sure this enum is signed 32bit. */
} FMOD_DSP_TYPE;

The enum above lists the supported DSP effect types. For example, when you want to apply echo and pitch processing to some audio, the core code looks like the snippet below. The principle is simple: just stack the DSP units onto the channel that is playing the audio.


System *system;
Channel *channel;
Sound *sound;

//...initialization omitted

//Echo effect
DSP *echoDsp;
system->createDSPByType(FMOD_DSP_TYPE_ECHO, &echoDsp);                    // echo unit
echoDsp->setParameterFloat(FMOD_DSP_ECHO_DELAY, echoDelay);               // delay in ms, 10~5000, default 500
echoDsp->setParameterFloat(FMOD_DSP_ECHO_FEEDBACK, echoFeedback);         // echo decay per delay, 0~100; 100 = no decay, 0 = total decay, default 50
echoDsp->setParameterFloat(FMOD_DSP_ECHO_DRYLEVEL, echoDryLevel);         // dry (original) signal level in dB, -80~10, default 0
echoDsp->setParameterFloat(FMOD_DSP_ECHO_WETLEVEL, echoWetLevel);         // wet (echo) signal level in dB, -80~10, default 0
channel->addDSP(0, echoDsp);


//Pitch effect
DSP *pitchDsp;
system->createDSPByType(FMOD_DSP_TYPE_PITCHSHIFT, &pitchDsp);// pitch shifter
pitchDsp->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);//1.0 = unchanged; 0.5 = one octave down, 2.0 = one octave up; range 0.5~2.0
pitchDsp->setParameterFloat(FMOD_DSP_PITCHSHIFT_FFTSIZE, fftSize);
channel->addDSP(0, pitchDsp);


//...playback omitted

The other DSP effects follow the same pattern.
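One thing worth adding: DSPs attached with addDSP stay on the channel, so if you switch presets while the same channel keeps playing, the effects will stack. A minimal clean-up sketch, assuming channel, echoDsp and pitchDsp are the variables from the snippet above:

//Detach the effect units from the channel and free them
channel->removeDSP(echoDsp);
channel->removeDSP(pitchDsp);
echoDsp->release();
pitchDsp->release();

Alternatively, DSP::setBypass(true) keeps a unit attached but lets the audio pass through it unprocessed, which is handy for toggling an effect on and off.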

4.3 Mixing multiple audio tracks

Mixing several tracks is straightforward: just play the corresponding audio files on multiple channels at the same time

system->playSound(sound1, 0, false, &channel1);
system->playSound(sound2, 0, false, &channel2);
system->playSound(sound3, 0, false, &channel3);
...
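A slightly fuller sketch of the same idea, using hypothetical file paths and assuming system has already been created and initialised as in 4.1: each file gets its own Sound and Channel, and the balance between tracks is set per channel with setVolume.

Sound *voice = nullptr, *bgm = nullptr;
Channel *voiceChannel = nullptr, *bgmChannel = nullptr;

//Hypothetical paths; replace them with real files on the device
system->createSound("/sdcard/voice.wav", FMOD_DEFAULT, nullptr, &voice);
system->createSound("/sdcard/bgm.mp3", FMOD_LOOP_NORMAL, nullptr, &bgm);

system->playSound(voice, nullptr, false, &voiceChannel);
system->playSound(bgm, nullptr, false, &bgmChannel);

voiceChannel->setVolume(1.0f);   //foreground voice at full volume
bgmChannel->setVolume(0.3f);     //background music kept quieter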

4.4 Saving the processed audio to a file

Core code

 system->setOutput(FMOD_OUTPUTTYPE_WAVWRITER);                        //write the output as a WAV file
 system->init(32, FMOD_INIT_NORMAL | FMOD_INIT_PROFILE_ENABLE, {output path});//this line is what makes the processed mix get written to disk

After that, process the audio and play it back as usual; a WAV file will be written to the specified path.
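Note the order: setOutput must be called before init, and the output path is passed as init's extradriverdata argument. A condensed sketch of the whole flow with hypothetical paths (the complete version is the saveMixSound function in section 4.5):

System *system = nullptr;
System_Create(&system);

const char *outputPath = "/sdcard/out.wav";   //hypothetical output path
system->setOutput(FMOD_OUTPUTTYPE_WAVWRITER); //route the mixer output to a WAV file instead of the speaker
system->init(32, FMOD_INIT_NORMAL, (void *) outputPath);

Sound *sound = nullptr;
Channel *channel = nullptr;
system->createSound("/sdcard/in.mp3", FMOD_DEFAULT, nullptr, &sound); //hypothetical input file
system->playSound(sound, nullptr, false, &channel);
//...add DSP effects to channel here, exactly as in section 4.2...

bool playing = true;
while (playing) {           //keep mixing until the sound finishes; everything mixed ends up in the WAV
    system->update();
    channel->isPlaying(&playing);
}

sound->release();
system->close();
system->release();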

4.5 Core native code written during testing (for reference only)

Note: please go easy on the author's messy C code; it was thrown together in a hurry, so just bear with it....

#include <jni.h>
#include <string>
#include "aisound.h"
#include <fmod.hpp>
#include <android/log.h>
#include <unistd.h>
#include <cstring>
#include <fmod_errors.h>

#define TAG "FMOD"
#define LOGI(FORMAT, ...) __android_log_print(ANDROID_LOG_INFO,TAG,FORMAT,##__VA_ARGS__);
#define LOGD(FORMAT, ...) __android_log_print(ANDROID_LOG_DEBUG,TAG,FORMAT,##__VA_ARGS__);
#define LOGE(FORMAT, ...) __android_log_print(ANDROID_LOG_ERROR,TAG,FORMAT,##__VA_ARGS__);

using namespace FMOD;

/**
 * ********************************************************************        Mixed playback with voice effects    ********************************************************************
 */


/**
 * Helper: prints an FMOD_RESULT as a readable error message
 */
void printFmodResult(FMOD_RESULT result) {
    LOGE("%s", FMOD_ErrorString(result))
}


/**
 * ****************************************************** fmod parameter adjustment  Start ***********************************************************
 */


/**
 * The applyXXXEffect family of functions applies one voice effect each
 * @param env                 the JNI environment pointer
 * @param sound_effect_bean   jobject passed from the Java layer; concrete type SoundEffectBean, which wraps all the effect parameter values
 * @param system              the FMOD::System instance used for playback
 * @param adjustChannel       the playback Channel (track) the effect is applied to
 */
void applyChorusSoundEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *adjustChannel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);

    //Chorus
    jmethodID getChorusEffect = env->GetMethodID(cls, "getChorusEffect", "()Lio/microshow/aisound/soundeffect/type/ChorusEffect;");
    jobject chorusEffectObject = env->CallObjectMethod(sound_effect_bean, getChorusEffect);


    if (chorusEffectObject != NULL) {
        jclass chorusEffectClass = env->GetObjectClass(chorusEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(chorusEffectObject, env->GetMethodID(chorusEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat chorusDepth = env->CallFloatMethod(chorusEffectObject, env->GetMethodID(chorusEffectClass, "getChorusDepth", "()F"));
        jfloat chorusMix = env->CallFloatMethod(chorusEffectObject, env->GetMethodID(chorusEffectClass, "getChorusMix", "()F"));
        jfloat chorusRate = env->CallFloatMethod(chorusEffectObject, env->GetMethodID(chorusEffectClass, "getChorusRate", "()F"));

        DSP *chorusDsp;
        system->createDSPByType(FMOD_DSP_TYPE_CHORUS, &chorusDsp);               /* This unit produces a chorus effect on the sound. */
        chorusDsp->setParameterFloat(FMOD_DSP_CHORUS_DEPTH, chorusDepth);        /* (Type:float) - Chorus modulation depth.  0.0 to 100.0.  Default = 3.0. */
        chorusDsp->setParameterFloat(FMOD_DSP_CHORUS_MIX, chorusMix);            /* (Type:float) - Volume of original signal to pass to output.  0.0 to 100.0. Default = 50.0. */
        chorusDsp->setParameterFloat(FMOD_DSP_CHORUS_RATE, chorusRate);          /* (Type:float) - Chorus modulation rate in Hz.  0.0 to 20.0.  Default = 0.8 Hz. */
        adjustChannel->addDSP(0, chorusDsp);

        LOGI("%s%.2f", "chorusDepth:", chorusDepth)
        LOGI("%s%.2f", "chorusMix:", chorusMix)
        LOGI("%s%.2f", "chorusRate:", chorusRate)
    }
}

void applyEchoEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getEchoEffect = env->GetMethodID(cls, "getEchoEffect", "()Lio/microshow/aisound/soundeffect/type/EchoEffect;");
    jobject echoEffectObject = env->CallObjectMethod(sound_effect_bean, getEchoEffect);
    if (echoEffectObject != NULL) {
        jclass echoEffectClass = env->GetObjectClass(echoEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(echoEffectObject, env->GetMethodID(echoEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat echoDelay = env->CallFloatMethod(echoEffectObject, env->GetMethodID(echoEffectClass, "getEchoDelay", "()F"));
        jfloat echoFeedback = env->CallFloatMethod(echoEffectObject, env->GetMethodID(echoEffectClass, "getEchoFeedback", "()F"));
        jfloat echoDryLevel = env->CallFloatMethod(echoEffectObject, env->GetMethodID(echoEffectClass, "getEchoDryLevel", "()F"));
        jfloat echoWetLevel = env->CallFloatMethod(echoEffectObject, env->GetMethodID(echoEffectClass, "getEchoWetLevel", "()F"));


        DSP *echoDsp;
        system->createDSPByType(FMOD_DSP_TYPE_ECHO, &echoDsp);                    // echo unit
        echoDsp->setParameterFloat(FMOD_DSP_ECHO_DELAY, echoDelay);               // delay in ms, 10~5000, default 500
        echoDsp->setParameterFloat(FMOD_DSP_ECHO_FEEDBACK, echoFeedback);         // echo decay per delay, 0~100; 100 = no decay, 0 = total decay, default 50
        echoDsp->setParameterFloat(FMOD_DSP_ECHO_DRYLEVEL, echoDryLevel);         // dry (original) signal level in dB, -80~10, default 0
        echoDsp->setParameterFloat(FMOD_DSP_ECHO_WETLEVEL, echoWetLevel);         // wet (echo) signal level in dB, -80~10, default 0
        channel->addDSP(0, echoDsp);


        LOGI("%s%.2f", "echoDelay:", echoDelay)
        LOGI("%s%.2f", "echoFeedback:", echoFeedback)
        LOGI("%s%.2f", "echoDryLevel:", echoDryLevel)
        LOGI("%s%.2f", "echoWetLevel:", echoWetLevel)
    }
}

void applyEqEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getEqEffect = env->GetMethodID(cls, "getEqEffect", "()Lio/microshow/aisound/soundeffect/type/EqEffect;");
    jobject eqEffectObject = env->CallObjectMethod(sound_effect_bean, getEqEffect);
    if (eqEffectObject != NULL) {
        jclass eqEffectClass = env->GetObjectClass(eqEffectObject);


        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat lowGain = env->CallFloatMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "getLowGain", "()F"));
        jfloat midGain = env->CallFloatMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "getMidGain", "()F"));
        jfloat highGain = env->CallFloatMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "getHighGain", "()F"));
        jfloat lowToMidCrossover = env->CallFloatMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "getLowToMidCrossover", "()F"));
        jfloat midToHighCrossover = env->CallFloatMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "getMidToHighCrossover", "()F"));
        jint crossOverSlope = env->CallIntMethod(eqEffectObject, env->GetMethodID(eqEffectClass, "getCrossOverSlope", "()I"));


        DSP *eqAdjustDsp;
        system->createDSPByType(FMOD_DSP_TYPE_THREE_EQ, &eqAdjustDsp);

        //The two crossover points below split the audio into three bands: 0~400, 400~4000, 4000~
        eqAdjustDsp->setParameterFloat(FMOD_DSP_THREE_EQ_LOWCROSSOVER, lowToMidCrossover);//low/mid crossover frequency, range 10~22000, default 400. The more low end you keep, the deeper the voice sounds.
        eqAdjustDsp->setParameterFloat(FMOD_DSP_THREE_EQ_HIGHCROSSOVER, midToHighCrossover);//mid/high crossover frequency, range 10~22000, default 4000. The more high end you keep, the more piercing the voice; a 10 kHz setting with reverb makes the voice really soar, though a lower value sounds more natural.
        eqAdjustDsp->setParameterInt(FMOD_DSP_THREE_EQ_CROSSOVERSLOPE, crossOverSlope);//crossover slope, three options: 1 = 12 dB/octave, 2 = 24 dB/octave, 3 = 48 dB/octave. Default is 1.

        //The three gains below adjust each band in dB
        eqAdjustDsp->setParameterFloat(FMOD_DSP_THREE_EQ_LOWGAIN, lowGain);//low-band gain in dB, range -80~10, default 0
        eqAdjustDsp->setParameterFloat(FMOD_DSP_THREE_EQ_MIDGAIN, midGain);//mid-band gain in dB, range -80~10, default 0
        eqAdjustDsp->setParameterFloat(FMOD_DSP_THREE_EQ_HIGHGAIN, highGain);//high-band gain in dB, range -80~10, default 0

        channel->addDSP(0, eqAdjustDsp);


        LOGI("%s%.2f", "lowGain:", lowGain)
        LOGI("%s%.2f", "midGain:", midGain)
        LOGI("%s%.2f", "highGain:", highGain)
        LOGI("%s%.2f", "lowToMidCrossover:", lowToMidCrossover)
        LOGI("%s%.2f", "midToHighCrossover:", midToHighCrossover)
        LOGI("%s%d", "crossOverSlope:", crossOverSlope)
    }
}

void applyFlangeEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getFlangeEffect = env->GetMethodID(cls, "getFlangeEffect", "()Lio/microshow/aisound/soundeffect/type/FlangeEffect;");
    jobject flangeEffectObject = env->CallObjectMethod(sound_effect_bean, getFlangeEffect);
    if (flangeEffectObject != NULL) {
        jclass flangeEffectClass = env->GetObjectClass(flangeEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(flangeEffectObject, env->GetMethodID(flangeEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat flangeMix = env->CallFloatMethod(flangeEffectObject, env->GetMethodID(flangeEffectClass, "getFlangeMix", "()F"));
        jfloat flangeDepth = env->CallFloatMethod(flangeEffectObject, env->GetMethodID(flangeEffectClass, "getFlangeDepth", "()F"));
        jfloat flangeRate = env->CallFloatMethod(flangeEffectObject, env->GetMethodID(flangeEffectClass, "getFlangeRate", "()F"));


        DSP *flangeDsp;
        system->createDSPByType(FMOD_DSP_TYPE_FLANGE, &flangeDsp);
        flangeDsp->setParameterFloat(FMOD_DSP_FLANGE_MIX, flangeMix);                 //range 0 ~ 100,          default 50
        flangeDsp->setParameterFloat(FMOD_DSP_FLANGE_DEPTH, flangeDepth);             //range 0.01 ~ 1.0,       default 1.0
        flangeDsp->setParameterFloat(FMOD_DSP_FLANGE_RATE, flangeRate);               //range 0.0 ~ 20, in Hz,  default 0.1
        channel->addDSP(0, flangeDsp);

        LOGI("%s%.2f", "flangeMix:", flangeMix)
        LOGI("%s%.2f", "flangeDepth:", flangeDepth)
        LOGI("%s%.2f", "flangeRate:", flangeRate)
    }
}

void applyFrequencyEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getFrequencyEffect = env->GetMethodID(cls, "getFrequencyEffect", "()Lio/microshow/aisound/soundeffect/type/FrequencyEffect;");
    jobject frequencyEffectObject = env->CallObjectMethod(sound_effect_bean, getFrequencyEffect);
    if (frequencyEffectObject != NULL) {
        jclass frequencyEffectClass = env->GetObjectClass(frequencyEffectObject);


        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(frequencyEffectObject, env->GetMethodID(frequencyEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat frequency = env->CallFloatMethod(frequencyEffectObject, env->GetMethodID(frequencyEffectClass, "getFrequency", "()F"));


        //Playback speed multiplier (scales the channel's sample rate)
        float originalFrequency;
        channel->getFrequency(&originalFrequency);
        channel->setFrequency(originalFrequency * frequency);

        LOGI("%s%.2f", "frequencyTimes:", frequency)
    }

}

void applyPitchEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getPitchEffect = env->GetMethodID(cls, "getPitchEffect", "()Lio/microshow/aisound/soundeffect/type/PitchEffect;");
    jobject pitchEffectObject = env->CallObjectMethod(sound_effect_bean, getPitchEffect);
    if (pitchEffectObject != NULL) {
        jclass pitchEffectClass = env->GetObjectClass(pitchEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(pitchEffectObject, env->GetMethodID(pitchEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat pitch = env->CallFloatMethod(pitchEffectObject, env->GetMethodID(pitchEffectClass, "getPitch", "()F"));
        jfloat fftSize = env->CallFloatMethod(pitchEffectObject, env->GetMethodID(pitchEffectClass, "getFftSize", "()F"));


        DSP *pitchDsp;
        system->createDSPByType(FMOD_DSP_TYPE_PITCHSHIFT, &pitchDsp);// pitch shifter
        pitchDsp->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);//1.0 = unchanged; 0.5 = one octave down, 2.0 = one octave up; range 0.5~2.0
        pitchDsp->setParameterFloat(FMOD_DSP_PITCHSHIFT_FFTSIZE, fftSize);
        channel->addDSP(0, pitchDsp);

        LOGI("%s%.2f", "pitch:", pitch)
        LOGI("%s%.2f", "fftSize:", fftSize)
    }
}

void applyTremoloEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getTremoloEffect = env->GetMethodID(cls, "getTremoloEffect", "()Lio/microshow/aisound/soundeffect/type/TremoloEffect;");
    jobject tremoloEffectObject = env->CallObjectMethod(sound_effect_bean, getTremoloEffect);
    if (tremoloEffectObject != NULL) {
        jclass tremoloEffectClass = env->GetObjectClass(tremoloEffectObject);


        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat tremoloDepth = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloDepth", "()F"));
        jfloat tremoloDuty = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloDuty", "()F"));
        jfloat tremoloFrequency = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloFrequency", "()F"));
        jfloat tremoloPhase = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloPhase", "()F"));
        jfloat tremoloShape = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloShape", "()F"));
        jfloat tremoloSkew = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloSkew", "()F"));
        jfloat tremoloSpread = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloSpread", "()F"));
        jfloat tremoloSquare = env->CallFloatMethod(tremoloEffectObject, env->GetMethodID(tremoloEffectClass, "getTremoloSquare", "()F"));


        DSP *tremoloDsp;
        system->createDSPByType(FMOD_DSP_TYPE_TREMOLO, &tremoloDsp);
        //TREMOLO DEPTH controls how strongly the tremolo modulates the signal; at its minimum it is effectively the same as turning the effect off
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_DEPTH, tremoloDepth);            /* (Type:float) - Tremolo depth.  0 to 1.  Default = 1. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_DUTY, tremoloDuty);              /* (Type:float) - LFO on-time.  0 to 1.  Default = 0.5. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_FREQUENCY, tremoloFrequency);    /* (Type:float) - LFO frequency in Hz.  0.1 to 20.  Default = 5. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_PHASE, tremoloPhase);            /* (Type:float) - Instantaneous LFO phase.  0 to 1.  Default = 0. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_SHAPE, tremoloShape);            /* (Type:float) - LFO shape morph between triangle and sine.  0 to 1.  Default = 0. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_SKEW, tremoloSkew);              /* (Type:float) - Time-skewing of LFO cycle.  -1 to 1.  Default = 0. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_SPREAD, tremoloSpread);          /* (Type:float) - Rotation / auto-pan effect.  -1 to 1.  Default = 0. */
        tremoloDsp->setParameterFloat(FMOD_DSP_TREMOLO_SQUARE, tremoloSquare);          /* (Type:float) - Flatness of the LFO shape.  0 to 1.  Default = 0. */
        channel->addDSP(0, tremoloDsp);


        LOGI("%s%.2f", "tremoloDepth:", tremoloDepth);
        LOGI("%s%.2f", "tremoloDuty:", tremoloDuty);
        LOGI("%s%.2f", "tremoloFrequency:", tremoloFrequency);
        LOGI("%s%.2f", "tremoloPhase:", tremoloPhase);
        LOGI("%s%.2f", "tremoloShape:", tremoloShape);
        LOGI("%s%.2f", "tremoloSkew:", tremoloSkew);
        LOGI("%s%.2f", "tremoloSpread:", tremoloSpread);
        LOGI("%s%.2f", "tremoloSquare:", tremoloSquare);
    }
}

void applyNormalizeEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getNormalizeEffect = env->GetMethodID(cls, "getNormalizeEffect", "()Lio/microshow/aisound/soundeffect/type/NormalizeEffect;");
    jobject normalizeEffectObject = env->CallObjectMethod(sound_effect_bean, getNormalizeEffect);
    if (normalizeEffectObject != NULL) {
        jclass normalizeEffectClass = env->GetObjectClass(normalizeEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(normalizeEffectObject, env->GetMethodID(normalizeEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;

        //Effect parameters
        jfloat fadeTime = env->CallFloatMethod(normalizeEffectObject, env->GetMethodID(normalizeEffectClass, "getFadeTime", "()F"));
        jfloat threshold = env->CallFloatMethod(normalizeEffectObject, env->GetMethodID(normalizeEffectClass, "getThreshold", "()F"));
        jfloat maxAmp = env->CallFloatMethod(normalizeEffectObject, env->GetMethodID(normalizeEffectClass, "getMaxAmp", "()F"));


        DSP *normalizeDsp;
        system->createDSPByType(FMOD_DSP_TYPE_NORMALIZE, &normalizeDsp);    //amplify quiet signals
        normalizeDsp->setParameterFloat(FMOD_DSP_NORMALIZE_FADETIME, fadeTime);      /* (Type:float) - Time to ramp the silence to full in ms.  0.0 to 20000.0. Default = 5000.0. */
        normalizeDsp->setParameterFloat(FMOD_DSP_NORMALIZE_THRESHHOLD, threshold);   /* (Type:float) - Lower volume range threshold to ignore.  0.0 to 1.0.  Default = 0.1.  Raise higher to stop amplification of very quiet signals. */
        normalizeDsp->setParameterFloat(FMOD_DSP_NORMALIZE_MAXAMP, maxAmp);          /* (Type:float) - Maximum amplification allowed.  1.0 to 100000.0.  Default = 20.0.  1.0 = no amplification, higher values allow more boost. */

        channel->addDSP(0, normalizeDsp);
        LOGI("%s%.2f", "fadeTime:", fadeTime);
        LOGI("%s%.2f", "threshold:", threshold);
        LOGI("%s%.2f", "maxAmp:", maxAmp);
    }
}

void applySfxReverbEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getSfxReverbEffect = env->GetMethodID(cls, "getSfxReverbEffect", "()Lio/microshow/aisound/soundeffect/type/SfxReverbEffect;");
    jobject sfxReverbEffectObject = env->CallObjectMethod(sound_effect_bean, getSfxReverbEffect);
    if (sfxReverbEffectObject != NULL) {
        jclass sfxReverbEffectClass = env->GetObjectClass(sfxReverbEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat decayTime = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getDecayTime", "()F"));
        jfloat earlyDelay = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getEarlyDelay", "()F"));
        jfloat lateDelay = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getLateDelay", "()F"));
        jfloat hfReference = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getHfReference", "()F"));
        jfloat hfDecayRatio = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getHfDecayRatio", "()F"));
        jfloat diffusion = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getDiffusion", "()F"));
        jfloat density = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getDensity", "()F"));
        jfloat lowShelfFrequency = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getLowShelfFrequency", "()F"));
        jfloat lowShelfGain = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getLowShelfGain", "()F"));
        jfloat highCut = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getHighCut", "()F"));
        jfloat earlyLateMix = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getEarlyLateMix", "()F"));
        jfloat wetLevel = env->CallFloatMethod(sfxReverbEffectObject, env->GetMethodID(sfxReverbEffectClass, "getWetLevel", "()F"));


        DSP *sfxReverbDsp;
        system->createDSPByType(FMOD_DSP_TYPE_SFXREVERB, &sfxReverbDsp);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_DECAYTIME, decayTime);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_EARLYDELAY, earlyDelay);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_LATEDELAY, lateDelay);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_HFREFERENCE, hfReference);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_HFDECAYRATIO, hfDecayRatio);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_DIFFUSION, diffusion);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_DENSITY, density);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_LOWSHELFFREQUENCY, lowShelfFrequency);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_LOWSHELFGAIN, lowShelfGain);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_HIGHCUT, highCut);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_EARLYLATEMIX, earlyLateMix);
        sfxReverbDsp->setParameterFloat(FMOD_DSP_SFXREVERB_WETLEVEL, wetLevel);
        channel->addDSP(0, sfxReverbDsp);


        LOGI("%s%.2f", "decayTime:", decayTime);
        LOGI("%s%.2f", "earlyDelay:", earlyDelay);
        LOGI("%s%.2f", "lateDelay:", lateDelay);
        LOGI("%s%.2f", "hfReference:", hfReference);
        LOGI("%s%.2f", "hfDecayRatio:", hfDecayRatio);
        LOGI("%s%.2f", "diffusion:", diffusion);
        LOGI("%s%.2f", "density:", density);
        LOGI("%s%.2f", "lowShelfFrequency:", lowShelfFrequency);
        LOGI("%s%.2f", "lowShelfGain:", lowShelfGain);
        LOGI("%s%.2f", "highCut:", highCut);
        LOGI("%s%.2f", "earlyLateMix:", earlyLateMix);
        LOGI("%s%.2f", "wetLevel:", wetLevel);
    }
}

void applyDistortionEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getDistortionEffect = env->GetMethodID(cls, "getDistortionEffect", "()Lio/microshow/aisound/soundeffect/type/DistortionEffect;");
    jobject distortionEffectObject = env->CallObjectMethod(sound_effect_bean, getDistortionEffect);
    if (distortionEffectObject != NULL) {
        jclass distortionEffectClass = env->GetObjectClass(distortionEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(distortionEffectObject, env->GetMethodID(distortionEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jfloat distortionLevel = env->CallFloatMethod(distortionEffectObject, env->GetMethodID(distortionEffectClass, "getDistortionLevel", "()F"));

        DSP *distortionDsp;
        system->createDSPByType(FMOD_DSP_TYPE_DISTORTION, &distortionDsp);
        distortionDsp->setParameterFloat(FMOD_DSP_DISTORTION_LEVEL, distortionLevel);
        channel->addDSP(0, distortionDsp);

        LOGI("%s%.2f", "distortionLevel:", distortionLevel);
    }
}

void applyOscillatorEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {
    jclass cls = env->GetObjectClass(sound_effect_bean);
    jmethodID getOscillatorEffect = env->GetMethodID(cls, "getOscillatorEffect", "()Lio/microshow/aisound/soundeffect/type/OscillatorEffect;");
    jobject oscillatorEffectObject = env->CallObjectMethod(sound_effect_bean, getOscillatorEffect);
    if (oscillatorEffectObject != NULL) {
        jclass oscillatorEffectClass = env->GetObjectClass(oscillatorEffectObject);

        //Whether this effect is enabled
        jboolean isEffectActive = env->CallBooleanMethod(oscillatorEffectObject, env->GetMethodID(oscillatorEffectClass, "isActive", "()Z"));
        if (!isEffectActive) return;


        //Effect parameters
        jint oscillatorType = env->CallIntMethod(oscillatorEffectObject, env->GetMethodID(oscillatorEffectClass, "getOscillatorType", "()I"));
        jfloat oscillatorRate = env->CallFloatMethod(oscillatorEffectObject, env->GetMethodID(oscillatorEffectClass, "getOscillatorRate", "()F"));

        DSP *oscillatorDsp;
        system->createDSPByType(FMOD_DSP_TYPE_OSCILLATOR, &oscillatorDsp);
        oscillatorDsp->setParameterInt(FMOD_DSP_OSCILLATOR_TYPE, oscillatorType);
        oscillatorDsp->setParameterFloat(FMOD_DSP_OSCILLATOR_RATE, oscillatorRate);
        channel->addDSP(0, oscillatorDsp);

        LOGI("%s%d", "oscillatorType:", oscillatorType);
        LOGI("%s%.2f", "oscillatorRate:", oscillatorRate);
    }
}


void applyAllSoundEffect(JNIEnv *env, jobject sound_effect_bean, System *system, Channel *channel) {

    applyChorusSoundEffect(env, sound_effect_bean, system, channel);
    applyEchoEffect(env, sound_effect_bean, system, channel);
    applyEqEffect(env, sound_effect_bean, system, channel);
    applyFlangeEffect(env, sound_effect_bean, system, channel);
    applyFrequencyEffect(env, sound_effect_bean, system, channel);
    applyPitchEffect(env, sound_effect_bean, system, channel);
    applyTremoloEffect(env, sound_effect_bean, system, channel);
    applyNormalizeEffect(env, sound_effect_bean, system, channel);
    applySfxReverbEffect(env, sound_effect_bean, system, channel);
    applyDistortionEffect(env, sound_effect_bean, system, channel);
    applyOscillatorEffect(env, sound_effect_bean, system, channel);
}

/**
 * ****************************************************** fmod parameter adjustment  End ***********************************************************
 */






Sound *mainSound, *subSound;
Channel *mainChannel, *subChannel;
bool loopMode;

jobject soundEffect;

extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_playMixSound
        (JNIEnv *env, jclass clazz, jstring main_audio, jstring sub_audio, jfloat main_audio_volume, jfloat sub_audio_volume, jobject sound_effect, jboolean isLoop, jobject callBack) {

    //Declarations
    System *system;
    bool isPlaying = true;
    bool isPause = false;

    //Create and initialise the System object
    System_Create(&system);
    system->init(32, FMOD_INIT_NORMAL, NULL);


    //Strings passed in from Java must be converted before C/C++ can use them
    const char *cstr_main_audio = env->GetStringUTFChars(main_audio, NULL);
    const char *cstr_sub_audio = env->GetStringUTFChars(sub_audio, NULL);


    //Current playback position in ms
    unsigned int playPosition;

    //Actual elapsed play time in ms
    jlong actualPlayTime = 0L;


    //Whether to loop playback
    loopMode = isLoop;

    //Methods on the Java callback object, i.e. the callbacks into the upper layer
    jclass cls = env->GetObjectClass(callBack);
    jmethodID onStartMethod = env->GetMethodID(cls, "onStart", "()V");
    jmethodID onTimeMethod = env->GetMethodID(cls, "onTime", "(JJ)V");
    jmethodID onCompleteMethod = env->GetMethodID(cls, "onComplete", "()V");
    jmethodID onErrorMethod = env->GetMethodID(cls, "onError", "(Ljava/lang/String;)V");


    try {
        //Main audio
        system->createSound(cstr_main_audio, loopMode ? FMOD_LOOP_NORMAL : FMOD_DEFAULT, NULL, &mainSound);
        system->playSound(mainSound, 0, false, &mainChannel);
        mainChannel->setVolume(main_audio_volume);


        //Apply the voice effects
        soundEffect = sound_effect;//remember the initial effect parameters
        applyAllSoundEffect(env, sound_effect, system, mainChannel);


        //Secondary audio; an empty string means no secondary audio was supplied
        if (env->GetStringLength(sub_audio) != 0) {
            //LOGE("%s", "processing the secondary audio (background sound)")
            system->createSound(cstr_sub_audio, FMOD_LOOP_NORMAL, NULL, &subSound);//subSound is the background music; keep it looping until mainSound finishes
            system->playSound(subSound, 0, false, &subChannel);
            subChannel->setVolume(sub_audio_volume);
        }


        //Notify the upper layer right before playback actually starts
        env->CallVoidMethod(callBack, onStartMethod);

    } catch (...) {
        jstring data = env->NewStringUTF("Mixed playback error");
        env->CallVoidMethod(callBack, onErrorMethod, data);
        goto end;
    }


    //Note: check the current state every 100 ms
    while (isPlaying) {
        usleep(1000 * 100);//the argument is in microseconds: 1 s = 10^6 µs
        mainChannel->isPlaying(&isPlaying);


        //Report the current playback progress to the upper layer, in ms
        mainChannel->getPaused(&isPause);
        if (!isPause) {

            //Get the playback position on the original track
            mainChannel->getPosition(&playPosition, FMOD_TIMEUNIT_MS);
            jlong relativePlayTime = playPosition;//note: this is the position on the original track, not the actual elapsed play time

            //Accumulate the actual play time by hand, in ms
            actualPlayTime = actualPlayTime + 100;

            //Notify once per second, i.e. only on whole seconds
            if (actualPlayTime % 1000 == 0) {
                env->CallVoidMethod(callBack, onTimeMethod, relativePlayTime, actualPlayTime);
            }
        }

        //Check whether the upper layer changed the loop setting during playback
        if (loopMode != isLoop) {
            isLoop = static_cast<jboolean>(loopMode);
            mainChannel->setMode(loopMode ? FMOD_LOOP_NORMAL : FMOD_LOOP_OFF);
        }
    }


    //Playback finished; invoke the completion callback
    env->CallVoidMethod(callBack, onCompleteMethod);

    end:


    //Release resources
    env->ReleaseStringUTFChars(main_audio, cstr_main_audio);
    env->ReleaseStringUTFChars(sub_audio, cstr_sub_audio);
    if (mainSound) mainSound->release();
    if (subSound) subSound->release();
    system->close();
    system->release();
}


extern "C"
JNIEXPORT jint JNICALL
Java_io_microshow_aisound_AiSound_saveMixSound
        (JNIEnv *env, jclass clazz, jstring main_audio, jstring sub_audio, jstring output_audio, jfloat main_audio_volume, jfloat sub_audio_volume, jobject sound_effect) {

    //GetStringUTFChars converts a jstring (a pointer to the JVM's internal Unicode sequence) into a UTF-8 C string
    const char *cstr_main_audio = env->GetStringUTFChars(main_audio, NULL);            //main audio path
    const char *cstr_sub_audio = env->GetStringUTFChars(sub_audio, NULL);              //secondary audio path
    const char *cstr_output_audio = env->GetStringUTFChars(output_audio, NULL);        //output path specified by the upper layer


    //Declarations
    System *system;
    Sound *mainSound = NULL, *subSound = NULL;
    Channel *mainChannel, *subChannel;
    bool isMainSoundPlaying = true;
    jint result = 1;


    //Initialisation
    System_Create(&system);
    //Set the sample rate to 48000 Hz, mono. Note: the sample rate has a big impact on audio quality; keep it the same as the rate used when recording
    system->setSoftwareFormat(48000, FMOD_SPEAKERMODE_MONO, 0);

    char cDest[200];
    strcpy(cDest, cstr_output_audio);
    system->setOutput(FMOD_OUTPUTTYPE_WAVWRITER);                        //write the output as a WAV file
    system->init(32, FMOD_INIT_NORMAL | FMOD_INIT_PROFILE_ENABLE, cDest);//this is what makes the processed mix get written to disk

    try {
        //Main audio
        system->createSound(cstr_main_audio, FMOD_DEFAULT, NULL, &mainSound);
        system->playSound(mainSound, 0, false, &mainChannel);
        mainChannel->setVolume(main_audio_volume);

        //Apply the voice effects
        applyAllSoundEffect(env, sound_effect, system, mainChannel);


        //Secondary audio
        if (env->GetStringLength(sub_audio) != 0) {
            LOGI("%s", "processing the secondary audio (background sound)")
            system->createSound(cstr_sub_audio, FMOD_LOOP_NORMAL, NULL, &subSound);//subSound is the background music; keep it looping until mainSound finishes
            system->playSound(subSound, 0, false, &subChannel);
            subChannel->setVolume(sub_audio_volume);
        }


    } catch (...) {
        LOGE("%s", "error while saving the processed audio")
        result = 0;
    }


    try {
        system->update();
        while (isMainSoundPlaying) {
            mainChannel->isPlaying(&isMainSoundPlaying);
            usleep(1000);
        }
        //Release resources
        env->ReleaseStringUTFChars(main_audio, cstr_main_audio);
        env->ReleaseStringUTFChars(sub_audio, cstr_sub_audio);
        env->ReleaseStringUTFChars(output_audio, cstr_output_audio);

        if (mainSound) mainSound->release();
        if (subSound) subSound->release();
        system->close();
        system->release();
    } catch (...) {
        LOGE("%s", "error while saving the processed audio")
        result = 0;
    }

    return result;
}



extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_setMixSoundMainVolume(JNIEnv *env, jclass clazz, jfloat volume) {
    LOGI("%s", "--> AiSound_setMixSoundMainVolume");
    mainChannel->setVolume(volume);
}


extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_setMixSoundSubVolume(JNIEnv *env, jclass clazz, jfloat volume) {
    LOGI("%s", "--> AiSound_setMixSoundSubVolume");
    subChannel->setVolume(volume);
}


extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_pauseMixSound(JNIEnv *env, jclass clazz) {
    LOGI("%s", "--> AiSound_pauseMixSound");
    mainChannel->setPaused(true);
    subChannel->setPaused(true);

}


extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_resumeMixSound(JNIEnv *env, jclass clazz) {
    LOGI("%s", "--> AiSound_resumeMixSound");
    mainChannel->setPaused(false);
    subChannel->setPaused(false);
}


extern "C"
JNIEXPORT jboolean JNICALL
Java_io_microshow_aisound_AiSound_isMixPlay(JNIEnv *env, jclass clazz) {
    bool isPlaying = false;
    mainChannel->isPlaying(&isPlaying);
    return static_cast<jboolean>(isPlaying);
}


extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_stopMixSound(JNIEnv *env, jclass clazz) {
    LOGI("%s", "--> AiSound_stopMixSound");
    mainChannel->stop();
}

extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_changeLoopMode(JNIEnv *env, jclass clazz, jboolean is_loop) {
    loopMode = is_loop;
}




extern "C"
JNIEXPORT void JNICALL
Java_io_microshow_aisound_AiSound_changeSoundEffectWhilePlaying(JNIEnv *env, jclass clazz, jobject sound_effect) {
    soundEffect = sound_effect;
}