Android Anonymous Shared Memory (Ashmem) Explained



Since Binder already provides IPC, why do we also need anonymous shared memory? I think the main reason is that Binder transactions are limited in size: even setting aside the application-level limit, the Binder driver caps a transaction at 4 MB, and sharing a single image can easily exceed that. The core idea of anonymous shared memory is to pass a file descriptor over Binder, so that both processes can map the same memory region and share data through it.

Using MemoryFile

For everyday development, Android provides MemoryFile as the interface to anonymous shared memory. Let's start with the simplest possible example.

Service side
const val GET_ASH_MEMORY = 1000
class MyService : Service() {
    val ashData = "AshDemo".toByteArray()
    override fun onBind(intent: Intent): IBinder {
        return object : Binder() {
            override fun onTransact(code: Int, data: Parcel, reply: Parcel?, flags: Int): Boolean {
                when(code){
                    GET_ASH_MEMORY->{// runs when the client's request arrives
                        val descriptor = createMemoryFile()
                        reply?.writeParcelable(descriptor, 0)
                        reply?.writeInt(ashData.size)
                        return true
                    }
                    else->{
                        return super.onTransact(code, data, reply, flags)
                    }
                }
            }
        }
    }
    private fun createMemoryFile(): ParcelFileDescriptor? {
        val file = MemoryFile("AshFile", 1024)// create the MemoryFile
        val descriptorMethod = file.javaClass.getDeclaredMethod("getFileDescriptor")
        val fd = descriptorMethod.invoke(file)// getFileDescriptor() is hidden, so grab the fd via reflection
        file.writeBytes(ashData, 0, 0, ashData.size)// write the string's bytes into the shared region
        return ParcelFileDescriptor.dup(fd as FileDescriptor?)// return a parcelable duplicate of the fd
    }
}

The service is straightforward: when it receives a GET_ASH_MEMORY request, it creates a MemoryFile, writes a string's byte array into it, and then writes the fd and the data length into the reply Parcel for the client.

Client side
class MainActivity : AppCompatActivity() {
    val connect = object :ServiceConnection{
        override fun onServiceConnected(name: ComponentName?, service: IBinder?) {
            val reply = Parcel.obtain()
            val sendData = Parcel.obtain()
            service?.transact(GET_ASH_MEMORY, sendData, reply, 0)// send the GET_ASH_MEMORY request
            val pfd = reply.readParcelable<ParcelFileDescriptor>(javaClass.classLoader)
            val descriptor = pfd?.fileDescriptor// unwrap the FileDescriptor
            val size = reply.readInt()// read the payload length
            val input = FileInputStream(descriptor)
            val bytes = input.readBytes()
            val message = String(bytes, 0, size, Charsets.UTF_8)// decode the string
            Toast.makeText(this@MainActivity,message,Toast.LENGTH_SHORT).show()
        }

        override fun onServiceDisconnected(name: ComponentName?) {
        }
    }
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        findViewById<TextView>(R.id.intent).setOnClickListener {
            // bind and auto-create the service
            bindService(Intent(this,MyService::class.java),connect, Context.BIND_AUTO_CREATE)
        }
    }
}

The client is just as simple: it binds the service, sends a request for the MemoryFile, reads the fd and the length out of the reply, reads the contents behind the fd with a FileInputStream, and finally shows the message in a Toast to confirm it really arrived.
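
As an aside, the reflection in createMemoryFile is only needed because MemoryFile hides its fd. Since API 27, SharedMemory is itself Parcelable, so the same round trip can be written without reflection. Below is a minimal sketch; the helper names writeReply and readReply are made up for illustration and would plug into the onTransact/onServiceConnected code above.

import android.os.Parcel
import android.os.SharedMemory

// Service side: create the region, write the payload, and parcel the SharedMemory itself.
fun writeReply(reply: Parcel, payload: ByteArray) {
    val shm = SharedMemory.create("AshDemo", payload.size)  // ashmem/memfd-backed region
    val buffer = shm.mapReadWrite()                          // mmap it read-write
    buffer.put(payload)                                      // copy the bytes in
    SharedMemory.unmap(buffer)                               // drop our mapping; the region stays alive
    reply.writeParcelable(shm, 0)                            // the fd travels over Binder inside the parcel
    reply.writeInt(payload.size)
}

// Client side: read the SharedMemory back out of the reply and map it read-only.
fun readReply(reply: Parcel): String {
    val shm = reply.readParcelable<SharedMemory>(SharedMemory::class.java.classLoader)!!
    val size = reply.readInt()
    val buffer = shm.mapReadOnly()
    val bytes = ByteArray(size).also { buffer.get(it) }
    SharedMemory.unmap(buffer)
    shm.close()
    return String(bytes, Charsets.UTF_8)
}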

How Ashmem Is Created

    public MemoryFile(String name, int length) throws IOException {
        try {
            mSharedMemory = SharedMemory.create(name, length);
            mMapping = mSharedMemory.mapReadWrite();
        } catch (ErrnoException ex) {
            ex.rethrowAsIOException();
        }
    }

MemoryFile is just a thin wrapper around SharedMemory; the real functionality lives in SharedMemory, so let's look at its implementation.

    public static @NonNull SharedMemory create(@Nullable String name, int size)
            throws ErrnoException {
        if (size <= 0) {
            throw new IllegalArgumentException("Size must be greater than zero");
        }
        return new SharedMemory(nCreate(name, size));
    }
  private static native FileDescriptor nCreate(String name, int size) throws ErrnoException;

The fd is obtained through a JNI call, so the Java layer here is also just a wrapper: by the time it gets the FileDescriptor, the region has already been created in native code.

//frameworks/base/core/jni/android_os_SharedMemory.cpp
jobject SharedMemory_nCreate(JNIEnv* env, jobject, jstring jname, jint size) {
    const char* name = jname ? env->GetStringUTFChars(jname, nullptr) : nullptr;
    int fd = ashmem_create_region(name, size);// create the anonymous shared memory region
    int err = fd < 0 ? errno : 0;
    if (name) {
        env->ReleaseStringUTFChars(jname, name);
    }
    if (fd < 0) {
        jniThrowErrnoException(env, "SharedMemory_create", err);
        return nullptr;
    }
    jobject jifd = jniCreateFileDescriptor(env, fd);// wrap the fd in a Java FileDescriptor and return it
    if (jifd == nullptr) {
        close(fd);
    }
    return jifd;
}

The creation itself is done by ashmem_create_region in libcutils:

//system/core/libcutils/ashmem-dev.cpp
int ashmem_create_region(const char *name, size_t size)
{
    int ret, save_errno;

    if (has_memfd_support()) {// on kernels that support memfd, use memfd instead of the ashmem driver
        return memfd_create_region(name ? name : "none", size);
    }

    int fd = __ashmem_open();// open the ashmem driver
    if (fd < 0) {
        return fd;
    }
    if (name) {
        char buf[ASHMEM_NAME_LEN] = {0};
        strlcpy(buf, name, sizeof(buf));
        ret = TEMP_FAILURE_RETRY(ioctl(fd, ASHMEM_SET_NAME, buf));// set the region name via ioctl
        if (ret < 0) {
            goto error;
        }
    }
    ret = TEMP_FAILURE_RETRY(ioctl(fd, ASHMEM_SET_SIZE, size));// set the region size via ioctl
    if (ret < 0) {
        goto error;
    }
    return fd;
error:
    save_errno = errno;
    close(fd);
    errno = save_errno;
    return ret;
}

This is the standard pattern for talking to a device driver:

1. open the driver node

2. interact with the driver through ioctl

Let's look at the open flow first.

static int __ashmem_open()
{
    int fd;

    pthread_mutex_lock(&__ashmem_lock);
    fd = __ashmem_open_locked();
    pthread_mutex_unlock(&__ashmem_lock);

    return fd;
}

/* logistics of getting file descriptor for ashmem */
static int __ashmem_open_locked()
{
    static const std::string ashmem_device_path = get_ashmem_device_path();// resolve the path of the ashmem device node
    if (ashmem_device_path.empty()) {
        return -1;
    }
    int fd = TEMP_FAILURE_RETRY(open(ashmem_device_path.c_str(), O_RDWR | O_CLOEXEC));
    return fd;
}

Back in the MemoryFile constructor: once the driver fd has been obtained, mapReadWrite() is called.

    public @NonNull ByteBuffer mapReadWrite() throws ErrnoException {
        return map(OsConstants.PROT_READ | OsConstants.PROT_WRITE, 0, mSize);
    }
 public @NonNull ByteBuffer map(int prot, int offset, int length) throws ErrnoException {
        checkOpen();
        validateProt(prot);
        if (offset < 0) {
            throw new IllegalArgumentException("Offset must be >= 0");
        }
        if (length <= 0) {
            throw new IllegalArgumentException("Length must be > 0");
        }
        if (offset + length > mSize) {
            throw new IllegalArgumentException("offset + length must not exceed getSize()");
        }
        long address = Os.mmap(0, length, prot, OsConstants.MAP_SHARED, mFileDescriptor, offset);// delegates to the system mmap call
        boolean readOnly = (prot & OsConstants.PROT_WRITE) == 0;
        Runnable unmapper = new Unmapper(address, length, mMemoryRegistration.acquire());
        return new DirectByteBuffer(length, address, mFileDescriptor, unmapper, readOnly);
    }

At this point a question comes up: Linux already has shared memory, so why did Android build its own? The only way to answer that is to look at the ashmem driver itself.

When reading a driver, the first things to look at are its init function and its file_operations.

static int __init ashmem_init(void)
{
    int ret = -ENOMEM;

    ashmem_area_cachep = kmem_cache_create("ashmem_area_cache",
                           sizeof(struct ashmem_area),
                           0, 0, NULL);// create the slab cache for ashmem_area
    if (!ashmem_area_cachep) {
        pr_err("failed to create slab cache\n");
        goto out;
    }

    ashmem_range_cachep = kmem_cache_create("ashmem_range_cache",
                        sizeof(struct ashmem_range),
                        0, SLAB_RECLAIM_ACCOUNT, NULL);// create the slab cache for ashmem_range
    if (!ashmem_range_cachep) {
        pr_err("failed to create slab cache\n");
        goto out_free1;
    }

    ret = misc_register(&ashmem_misc);// register ashmem as a misc device
    ........
    return ret;
}

Init creates two slab caches, ashmem_area_cachep and ashmem_range_cachep, which are used to allocate ashmem_area and ashmem_range structures, and then registers ashmem as a misc device.

//common/drivers/staging/android/ashmem.c
static const struct file_operations ashmem_fops = {
    .owner = THIS_MODULE,
    .open = ashmem_open,
    .release = ashmem_release,
    .read_iter = ashmem_read_iter,
    .llseek = ashmem_llseek,
    .mmap = ashmem_mmap,
    .unlocked_ioctl = ashmem_ioctl,
#ifdef CONFIG_COMPAT
    .compat_ioctl = compat_ashmem_ioctl,
#endif
#ifdef CONFIG_PROC_FS
    .show_fdinfo = ashmem_show_fdinfo,
#endif
};

Opening the device ends up in ashmem_open:

static int ashmem_open(struct inode *inode, struct file *file)
{
    struct ashmem_area *asma;
    int ret;

    ret = generic_file_open(inode, file);
    if (ret)
        return ret;

    asma = kmem_cache_zalloc(ashmem_area_cachep, GFP_KERNEL);// allocate an ashmem_area
    if (!asma)
        return -ENOMEM;

    INIT_LIST_HEAD(&asma->unpinned_list);// initialize the unpinned_list
    memcpy(asma->name, ASHMEM_NAME_PREFIX, ASHMEM_NAME_PREFIX_LEN);// give it the default name prefix
    asma->prot_mask = PROT_MASK;
    file->private_data = asma;
    return 0;
}

The ioctl calls that set the name and the size are handled by ashmem_ioctl:

static long ashmem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
    struct ashmem_area *asma = file->private_data;
    long ret = -ENOTTY;

    switch (cmd) {
    case ASHMEM_SET_NAME:
        ret = set_name(asma, (void __user *)arg);
        break;
    case ASHMEM_SET_SIZE:
        ret = -EINVAL;
        mutex_lock(&ashmem_mutex);
        if (!asma->file) {
            ret = 0;
            asma->size = (size_t)arg;
        }
        mutex_unlock(&ashmem_mutex);
        break;
    }
  ........
  }

The implementation is simple: it just updates fields in asma. Now for the key part, mmap, which is where the memory actually gets set up.

static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
{
    static struct file_operations vmfile_fops;
    struct ashmem_area *asma = file->private_data;
    int ret = 0;

    mutex_lock(&ashmem_mutex);

    /* user needs to SET_SIZE before mapping */
    if (!asma->size) {// the size must have been set via ASHMEM_SET_SIZE first
        ret = -EINVAL;
        goto out;
    }

    /* requested mapping size larger than object size */
    if (vma->vm_end - vma->vm_start > PAGE_ALIGN(asma->size)) {// the requested mapping must not exceed the region size
        ret = -EINVAL;
        goto out;
    }

    /* requested protection bits must match our allowed protection mask */
    if ((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask, 0)) &
        calc_vm_prot_bits(PROT_MASK, 0)) {// check the requested protection bits
        ret = -EPERM;
        goto out;
    }
    vma->vm_flags &= ~calc_vm_may_flags(~asma->prot_mask);

    if (!asma->file) {// no backing file has been created yet, so create one
        char *name = ASHMEM_NAME_DEF;
        struct file *vmfile;
        struct inode *inode;

        if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0')
            name = asma->name;

        /* ... and allocate the backing shmem file */
        vmfile = shmem_file_setup(name, asma->size, vma->vm_flags);// have Linux create a backing file in tmpfs
        if (IS_ERR(vmfile)) {
            ret = PTR_ERR(vmfile);
            goto out;
        }
        vmfile->f_mode |= FMODE_LSEEK;
        inode = file_inode(vmfile);
        lockdep_set_class(&inode->i_rwsem, &backing_shmem_inode_class);
        asma->file = vmfile;
        /*
         * override mmap operation of the vmfile so that it can't be
         * remapped which would lead to creation of a new vma with no
         * asma permission checks. Have to override get_unmapped_area
         * as well to prevent VM_BUG_ON check for f_ops modification.
         */
        if (!vmfile_fops.mmap) {// override the backing file's file ops so nothing else can mmap it directly
            vmfile_fops = *vmfile->f_op;
            vmfile_fops.mmap = ashmem_vmfile_mmap;
            vmfile_fops.get_unmapped_area =
                    ashmem_vmfile_get_unmapped_area;
        }
        vmfile->f_op = &vmfile_fops;
    }
    get_file(asma->file);

    /*
     * XXX - Reworked to use shmem_zero_setup() instead of
     * shmem_set_file while we're in staging. -jstultz
     */
    if (vma->vm_flags & VM_SHARED) {// will this mapping be shared across processes?
        ret = shmem_zero_setup(vma);// hook the shared backing file up to the VMA
        if (ret) {
            fput(asma->file);
            goto out;
        }
    } else {
        /*
         * vma_set_anonymous() simply clears vm_ops:
         * static inline void vma_set_anonymous(struct vm_area_struct *vma)
         * {
         *     vma->vm_ops = NULL;
         * }
         */
        vma_set_anonymous(vma);
    }

    vma_set_file(vma, asma->file);
    /* XXX: merge this with the get_file() above if possible */
    fput(asma->file);

out:
    mutex_unlock(&ashmem_mutex);
    return ret;
}

The function is long but the idea is clear: create a backing file in tmpfs and override its file operations. From here on, everything it calls is a standard Linux function; the real setup happens in shmem_zero_setup:

int shmem_zero_setup(struct vm_area_struct *vma)
{
    struct file *file;
    loff_t size = vma->vm_end - vma->vm_start;

    /*
     * Cloning a new file under mmap_lock leads to a lock ordering conflict
     * between XFS directory reading and selinux: since this file is only
     * accessible to the user through its mapping, use S_PRIVATE flag to
     * bypass file security, in the same way as shmem_kernel_file_setup().
     */
    file = shmem_kernel_file_setup("dev/zero", size, vma->vm_flags);
    if (IS_ERR(file))
        return PTR_ERR(file);

    if (vma->vm_file)
        fput(vma->vm_file);
    vma->vm_file = file;
    vma->vm_ops = &shmem_vm_ops;// the key step: point this VMA's vm_ops at shmem_vm_ops

    if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
            ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
            (vma->vm_end & HPAGE_PMD_MASK)) {
        khugepaged_enter(vma, vma->vm_flags);
    }
​
    return 0;
}
static const struct vm_operations_struct shmem_vm_ops = {
    .fault      = shmem_fault,// the page-fault handler that Linux shared memory is built on
    .map_pages  = filemap_map_pages,
#ifdef CONFIG_NUMA
    .set_policy     = shmem_set_policy,
    .get_policy     = shmem_get_policy,
#endif
};

At this point, the setup of the shared memory region is complete.

Ashmem Reads and Writes

//frameworks/base/core/java/android/os/MemoryFile.java
public void writeBytes(byte[] buffer, int srcOffset, int destOffset, int count)
            throws IOException {
        beginAccess();
        try {
            mMapping.position(destOffset);
            mMapping.put(buffer, srcOffset, count);
        } finally {
            endAccess();
        }
    }
    private void beginAccess() throws IOException {
        checkActive();
        if (mAllowPurging) {
            if (native_pin(mSharedMemory.getFileDescriptor(), true)) {
                throw new IOException("MemoryFile has been purged");
            }
        }
    }

    private void endAccess() throws IOException {
        if (mAllowPurging) {
            native_pin(mSharedMemory.getFileDescriptor(), false);
        }
    }

beginAccess and endAccess come as a pair: both call native_pin, a native function, once with true and once with false. Pinning locks the region so that the system will not reclaim it; when the memory is not in use, it is unpinned again.

static jboolean android_os_MemoryFile_pin(JNIEnv* env, jobject clazz, jobject fileDescriptor,
        jboolean pin) {
    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor);
    int result = (pin ? ashmem_pin_region(fd, 0, 0) : ashmem_unpin_region(fd, 0, 0));
    if (result < 0) {
        jniThrowException(env, "java/io/IOException", NULL);
    }
    return result == ASHMEM_WAS_PURGED;
}

These call ashmem_pin_region and ashmem_unpin_region to pin and unpin the region. The implementation is again in ashmem-dev.cpp:

//system/core/libcutils/ashmem-dev.cpp
int ashmem_pin_region(int fd, size_t offset, size_t len)
{
    .......
    ashmem_pin pin = { static_cast<uint32_t>(offset), static_cast<uint32_t>(len) };
    return __ashmem_check_failure(fd, TEMP_FAILURE_RETRY(ioctl(fd, ASHMEM_PIN, &pin)));
}

Here too the driver is notified through ioctl; I won't go into the pinning details. The actual data sharing during reads and writes relies on Linux's shared memory mechanism, and from the application's point of view a full round trip looks like the short sketch below.
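
A minimal single-process sketch of the MemoryFile read/write API (memoryFileRoundTrip is just an illustrative name, not framework API); on the older implementation quoted above, each call wraps the copy in the beginAccess/endAccess pinning pair:

import android.os.MemoryFile

fun memoryFileRoundTrip(): String {
    val payload = "AshDemo".toByteArray()
    val file = MemoryFile("AshFile", 1024)
    file.writeBytes(payload, 0, 0, payload.size)   // beginAccess() -> mMapping.put() -> endAccess()
    val out = ByteArray(payload.size)
    file.readBytes(out, 0, 0, payload.size)        // the read path goes through the same mapping
    file.close()                                   // unmaps the region and releases the ashmem fd
    return String(out, Charsets.UTF_8)
}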

A Brief Look at Linux Shared Memory

The simplest way to share memory is to mmap the same file from two processes. Real files on disk are far too slow for this, though, so a virtual file is created in tmpfs, an in-memory file system, and that is what gets mapped; as shown above, the VMA's vm_ops is also replaced with shmem_vm_ops. When a process first touches the mapping, a page fault is triggered: the fault handler looks the page up in the page cache, finds nothing on this first access, allocates a physical page and inserts it into the page cache. When the second process faults on the same mapping, the handler does find the page in the page cache, so both processes end up operating on the same physical memory.
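
The effect can be demonstrated with two mappings of one SharedMemory region. This is only a minimal sketch, and it keeps both mappings in a single process to stay self-contained, but the page-cache sharing works the same way when the second mapping lives in another process:

import android.os.SharedMemory

// Two independent mmap()s of the same region are backed by the same page-cache pages,
// so a write through one mapping is immediately visible through the other.
fun demonstrateSharedPages() {
    val shm = SharedMemory.create("PageCacheDemo", 4096)
    val writer = shm.mapReadWrite()        // first mapping: faults pages in and fills the page cache
    val reader = shm.mapReadOnly()         // second mapping of the same file, at a different address
    writer.put(0, 42.toByte())             // write through mapping #1
    check(reader.get(0) == 42.toByte())    // ...and read it back through mapping #2
    SharedMemory.unmap(writer)
    SharedMemory.unmap(reader)
    shm.close()
}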

Summary

So Ashmem turns out to be built on top of Linux shared memory, with a few adaptations:

  1. The memory is managed as regions that can be pinned and unpinned, so parts that are not in use can be unpinned and reclaimed by the system.
  2. Linux shared memory is identified by an integer key, whereas Ashmem is identified by a file descriptor, which lets it piggyback on Binder's fd-passing mechanism.
  3. Reads, writes, and configuration are all wrapped with pinning/locking, which makes the API harder to misuse.