A Brief Look at Memory Allocation in Android ART


Why read the ART virtual machine source code

  • To understand the concept of the VM's runtime heap and get a more concrete picture of the memory layout of a running Android process
  • To analyze how the VM organizes its heap memory spaces
  • To learn the VM-internal APIs and find hook points for performance monitoring

The relationship between the Dalvik VM and the ART VM

Dalvik and ART are two completely independent virtual machine implementations; the bytecode that ART interprets is still Dalvik bytecode (for compatibility). The Dalvik sources build into libdvm.so, while the ART sources build into libart.so. Devices running Android 5.0 and above use only the ART VM, and this article analyzes the ART implementation.

Overview of the GC source tree

Based on Android 11.


  • accounting (data structures that help track how objects are modified)
    • bitmap.h/.cc
    • card_table.h/.cc
    • heap_bitmap.h/.cc
    • mod_union_table.h/.cc
    • remembered_set.h/.cc
    • space_bitmap.h/.cc
  • allocator (the concrete allocator implementations; currently only dlmalloc and rosalloc)
    • dlmalloc.h/.cc
    • rosalloc.h/.cc
  • collector (the garbage collector implementations)
    • gc_type.h
    • immune_region.h/.cc
    • immune_spaces.h/.cc
    • semi_space.h/.cc
    • garbage_collector.h/.cc
    • concurrent_copying.h/.cc
    • mark_sweep.h/.cc
    • partial_mark_sweep.h/.cc
    • sticky_mark_sweep.h/.cc
  • space (the actual memory resources that Heap draws from when allocating)
    • space.h/.cc
    • region_space.h/.cc
    • malloc_space.h/.cc
    • rosalloc_space.h/.cc
    • dlmalloc_space.h/.cc
    • zygote_space.h/.cc
    • image_space.h/.cc
    • bump_pointer_space.h/.cc
    • large_object_space.h/.cc
  • heap.h/.cc
  • allocation_record.h/.cc
  • reference_queue.h/.cc
  • reference_processor.h/.cc
  • ...

The accounting directory implements data structures that assist GC analysis. allocator contains the concrete implementations of the two allocation algorithms, dlmalloc and rosalloc. The collector directory holds ART's garbage collector implementations; broadly the collection strategies fall into mark-sweep and copying, a specific collector is chosen depending on the scenario (foreground/background state, mark-sweep vs. copying), and there are several optimized variants. The space directory provides the memory resources from which ART allocates; the allocation functions exposed by Heap ultimately carve memory out of a Space. The Heap class is the concrete embodiment of the JVM "runtime heap" concept and acts as the facade of the heap subsystem, offering object allocation and garbage collection to the rest of the VM.

This article focuses on the Space portion of the gc code, to first understand how the VM provides memory resources.

The Space class hierarchy

In the Android Runtime, the Java heap is managed through the Space family of classes; a Space is best thought of as a container that manages Java objects. Each Space corresponds to one contiguous block of memory, or to several non-contiguous blocks, and every object allocation draws its memory from some Space.

The Space base class defines some basic attributes of a Space, such as its name and its GcRetentionPolicy.

GcRetentionPolicy indicates under what circumstances the memory allocated in a Space will be garbage collected; its values are listed below (the enum definition follows the list).

  • kGcRetentionPolicyNeverCollect  the Space is never garbage collected
  • kGcRetentionPolicyAlwaysCollect collection is attempted every time a GC is triggered
  • kGcRetentionPolicyFullCollect   the Space is only collected during a full GC
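The enum is defined in space.h roughly as follows (a lightly paraphrased excerpt; comments abridged):

enum GcRetentionPolicy {
  // Objects are retained forever with this policy for a space.
  kGcRetentionPolicyNeverCollect,
  // Every GC cycle will attempt to collect objects in this space.
  kGcRetentionPolicyAlwaysCollect,
  // Objects are only considered for collection during "full" GC cycles,
  // so faster partial collections skip these areas (for example the zygote space).
  kGcRetentionPolicyFullCollect,
};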

The ContinuousSpace and DiscontinuousSpace base classes define whether the memory a Space owns is contiguous. For a Space backed by contiguous memory, Begin() and End() return the start address and the current end of the used range, while Limit() returns the upper bound of the reserved range (so Limit() - Begin() is the total capacity the Space may use). A DiscontinuousSpace uses non-contiguous memory, so it can only be collected with mark-sweep; GetLiveBitmap() and GetMarkBitmap() return the live bitmap and the mark bitmap, which record the objects that survived the previous GC and the objects marked in the current GC, assisting the collector.
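A minimal sketch of the address-range accessors a continuous space exposes (simplified; the accessor names follow space.h, but this is not the full ART class):

#include <cstddef>
#include <cstdint>

class ContinuousSpaceSketch {
 public:
  uint8_t* Begin() const { return begin_; }  // start of the mapped range
  uint8_t* End() const { return end_; }      // end of the bytes used so far
  uint8_t* Limit() const { return limit_; }  // end of the reserved range
  size_t Size() const { return static_cast<size_t>(End() - Begin()); }       // bytes in use
  size_t Capacity() const { return static_cast<size_t>(Limit() - Begin()); } // bytes reserved

 private:
  uint8_t* begin_ = nullptr;
  uint8_t* end_ = nullptr;
  uint8_t* limit_ = nullptr;
};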

MemMapSpace

MemMapSpace uses a MemMap internally to obtain memory. MemMap is an ART helper class that wraps memory-map operations; under the hood it requests memory with mmap.
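As a rough illustration of what MemMap boils down to (a minimal sketch, not ART's MemMap class), an anonymous read/write mapping can be requested directly from mmap(2):

#include <sys/mman.h>
#include <cstddef>
#include <cstdint>

// Reserve `capacity` bytes of anonymous, readable and writable memory.
uint8_t* ReserveAnonymousRegion(size_t capacity) {
  void* addr = mmap(/*addr=*/nullptr, capacity,
                    PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS,
                    /*fd=*/-1, /*offset=*/0);
  return addr == MAP_FAILED ? nullptr : static_cast<uint8_t*>(addr);
}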

ImageSpace

mmap supports both anonymous mappings and file mappings; ImageSpace uses file-backed mmap to load its data. In the VM, ImageSpace is used to map image files such as boot.art and boot.oat into memory.

MallocSpace

MallocSpace is the common base class of DlMallocSpace and RosAllocSpace. Under normal circumstances, memory for Java objects is provided by one of MallocSpace's concrete subclasses (DlMallocSpace or RosAllocSpace).

LargeObjectSpace

 // Primitive arrays larger than this size are put in the large object space.
  static constexpr size_t kMinLargeObjectThreshold = 3 * kPageSize;
  static constexpr size_t kDefaultLargeObjectThreshold = kMinLargeObjectThreshold;

inline bool Heap::ShouldAllocLargeObject(ObjPtr<mirror::Class> c, size_t byte_count) const {
  // We need to have a zygote space or else our newly allocated large object can end up in the
  // Zygote resulting in it being prematurely freed.
  // We can only do this for primitive objects since large objects will not be within the card table
  // range. This also means that we rely on SetClass not dirtying the object's card.
  return byte_count >= large_object_threshold_ && (c->IsPrimitiveArray() || c->IsStringClass());
}

A large object is defined as a Java primitive array or String object larger than three memory pages (with a typical 4 KB page, that is 12 KB). LargeObjectSpace has two concrete implementations, LargeObjectMapSpace and FreeListSpace; by default the VM uses FreeListSpace on x86 and LargeObjectMapSpace on arm.

LargeObjectMapSpace

When LargeObjectMapSpace allocates memory for an object it calls mmap directly, and it keeps a map from the allocated object address to the corresponding MemMap.

mirror::Object* LargeObjectMapSpace::Alloc(Thread* self, size_t num_bytes,
                                           size_t* bytes_allocated, size_t* usable_size,
                                           size_t* bytes_tl_bulk_allocated) {
  std::string error_msg;
   // Request memory with an anonymous mmap.
  MemMap mem_map = MemMap::MapAnonymous("large object space allocation",
                                        num_bytes,
                                        PROT_READ | PROT_WRITE,
                                        /*low_4gb=*/ true,
                                        &error_msg);
  if (UNLIKELY(!mem_map.IsValid())) {
    LOG(WARNING) << "Large object allocation failed: " << error_msg;
    return nullptr;
  }
  mirror::Object* const obj = reinterpret_cast<mirror::Object*>(mem_map.Begin());
  const size_t allocation_size = mem_map.BaseSize();
  MutexLock mu(self, lock_);
  large_objects_.Put(obj, LargeObject {std::move(mem_map), false /* not zygote */});
  DCHECK(bytes_allocated != nullptr);

  if (begin_ == nullptr || begin_ > reinterpret_cast<uint8_t*>(obj)) {
    begin_ = reinterpret_cast<uint8_t*>(obj);
  }
  end_ = std::max(end_, reinterpret_cast<uint8_t*>(obj) + allocation_size);

  *bytes_allocated = allocation_size;
  if (usable_size != nullptr) {
    *usable_size = allocation_size;
  }
  // Update num_bytes_allocated_ and the other counters.
  DCHECK(bytes_tl_bulk_allocated != nullptr);
  *bytes_tl_bulk_allocated = allocation_size;
  num_bytes_allocated_ += allocation_size;
  total_bytes_allocated_ += allocation_size;
  ++num_objects_allocated_;
  ++total_objects_allocated_;
  return obj;
}

When the memory occupied by an object needs to be released, the corresponding MemMap is looked up by the object pointer and removed from large_objects_.

size_t LargeObjectMapSpace::Free(Thread* self, mirror::Object* ptr) {
  MutexLock mu(self, lock_);
   // Find the entry for this object pointer.
  auto it = large_objects_.find(ptr);
  if (UNLIKELY(it == large_objects_.end())) {
    ScopedObjectAccess soa(self);
    Runtime::Current()->GetHeap()->DumpSpaces(LOG_STREAM(FATAL_WITHOUT_ABORT));
    LOG(FATAL) << "Attempted to free large object " << ptr << " which was not live";
  }
  const size_t map_size = it->second.mem_map.BaseSize();
  DCHECK_GE(num_bytes_allocated_, map_size);
  size_t allocation_size = map_size;
  num_bytes_allocated_ -= allocation_size;
  --num_objects_allocated_;
  large_objects_.erase(it);
  return allocation_size;
}

Because LargeObjectMapSpace calls mmap for every allocation, the memory it uses is non-contiguous.

FreeListSpace

A free list is a data structure used to implement a particular dynamic memory allocation scheme. FreeListSpace also reserves one large block of memory up front via mmap, so the object memory it hands out is contiguous. Internally it keeps a list of AllocationInfo records: each AllocationInfo stores the amount of free space immediately in front of it and the size that has been allocated at its position. For the details of how allocation and freeing work, refer to the Wikipedia description of free lists and to the source code; they are not covered in depth here, but a toy sketch of the idea follows.
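Below is a toy first-fit free-list, not ART's FreeListSpace (the real implementation stores AllocationInfo headers inside the mapped region and works in fixed-size slots); it only illustrates the allocate / free / coalesce idea:

#include <cstddef>
#include <iterator>
#include <map>
#include <optional>

// Toy free-list allocator: `free_` maps offset -> size of each free block,
// `allocated_` maps offset -> size of each live allocation.
class FreeListSketch {
 public:
  explicit FreeListSketch(size_t capacity) { free_[0] = capacity; }

  // First-fit: take the first free block that is large enough.
  std::optional<size_t> Alloc(size_t bytes) {
    for (auto it = free_.begin(); it != free_.end(); ++it) {
      if (it->second >= bytes) {
        size_t offset = it->first;
        size_t remaining = it->second - bytes;
        free_.erase(it);
        if (remaining > 0) {
          free_[offset + bytes] = remaining;  // keep the unused tail on the free list
        }
        allocated_[offset] = bytes;
        return offset;
      }
    }
    return std::nullopt;  // no block big enough
  }

  // Return a block to the free list and merge it with adjacent free blocks.
  void Free(size_t offset) {
    size_t bytes = allocated_.at(offset);
    allocated_.erase(offset);
    free_[offset] = bytes;
    for (auto it = free_.begin(); it != free_.end() && std::next(it) != free_.end();) {
      auto next = std::next(it);
      if (it->first + it->second == next->first) {
        it->second += next->second;  // blocks are adjacent: merge them
        free_.erase(next);
      } else {
        ++it;
      }
    }
  }

 private:
  std::map<size_t, size_t> free_;
  std::map<size_t, size_t> allocated_;
};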

Whether ART uses LargeObjectMapSpace or FreeListSpace is decided by CPU architecture; according to the comments in the source, the reason has to do with the msync performance of mmap'ed memory.

DlMallocSpace and RosAllocSpace

DlMallocSpace and RosAllocSpace are the two concrete implementations of MallocSpace; internally they allocate with the dlmalloc and rosalloc algorithms from the allocator directory. dlmalloc was originally designed by Doug Lea starting in 1987, and the malloc in glibc is also based on the dlmalloc algorithm. dlmalloc can be regarded as a fairly general-purpose allocator, whereas Android implemented its own algorithm, rosalloc (short for runs-of-slots), to optimize further; rosalloc cooperates closely with other parts of the ART runtime, which is also visible from the headers the two allocators include. Due to space constraints this article does not analyze the two algorithms in detail. In the current ART implementation, rosalloc is used in most scenarios (in the current source, DlMallocSpace is only used when the Heap constructor creates the zygote / non_moving_space_).

As CreateMallocSpaceFromMemMap shows, RosAllocSpace is used by default (kUseRosAlloc defaults to true).
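The selection inside Heap::CreateMallocSpaceFromMemMap is roughly the following (a simplified paraphrase rather than a verbatim quote; the parameters are the function's own arguments plus the kDefaultStartingSize constant shown later):

space::MallocSpace* malloc_space = nullptr;
if (kUseRosAlloc) {
  // Default path: back the space with the rosalloc allocator.
  malloc_space = space::RosAllocSpace::CreateFromMemMap(
      std::move(mem_map), name, kDefaultStartingSize, initial_size,
      growth_limit, capacity, low_memory_mode_, can_move_objects);
} else {
  // Fallback: back the space with dlmalloc.
  malloc_space = space::DlMallocSpace::CreateFromMemMap(
      std::move(mem_map), name, kDefaultStartingSize, initial_size,
      growth_limit, capacity, can_move_objects);
}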

BumpPointerSpace

BumpPointerSpace is one of the concrete implementations of ContinuousMemMapAllocSpace, so the memory it reserves up front is also contiguous. "Bump pointer" means that allocated objects are placed directly next to each other: the implementation keeps a pointer to the end of the last allocation and simply bumps it forward after each new allocation (a simplified sketch of this fast path follows the constructor code below). The scheme is very simple, but because only the last allocation position is recorded, the memory of an individual object cannot be freed on its own; the space can only be released as a whole. Creating a BumpPointerSpace is straightforward: it mainly creates a MemMap via MemMap::MapAnonymous.

namespace art {
namespace gc {
namespace space {

BumpPointerSpace* BumpPointerSpace::Create(const std::string& name, size_t capacity) {
  capacity = RoundUp(capacity, kPageSize);
  std::string error_msg;
  MemMap mem_map = MemMap::MapAnonymous(name.c_str(),
                                        capacity,
                                        PROT_READ | PROT_WRITE,
                                        /*low_4gb=*/ true,
                                        &error_msg);
  if (!mem_map.IsValid()) {
    LOG(ERROR) << "Failed to allocate pages for alloc space (" << name << ") of size "
        << PrettySize(capacity) << " with message " << error_msg;
    return nullptr;
  }
  return new BumpPointerSpace(name, std::move(mem_map));
}

BumpPointerSpace::BumpPointerSpace(const std::string& name, MemMap&& mem_map)
        : ContinuousMemMapAllocSpace(name,
                                     std::move(mem_map),
                                     mem_map.Begin(),
                                     mem_map.Begin(),
                                     mem_map.End(),
                                     kGcRetentionPolicyAlwaysCollect),
    growth_end_(mem_map_.End()),
    objects_allocated_(0), bytes_allocated_(0),
    block_lock_("Block lock", kBumpPointerSpaceBlockLock),
    main_block_size_(0),
    num_blocks_(0) {
}

}  // namespace space
}  // namespace gc
}  // namespace art
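The allocation fast path mentioned above can be illustrated with a minimal, self-contained sketch (not ART's actual BumpPointerSpace code, which uses its own Atomic wrapper): the end pointer is bumped with a CAS loop so multiple threads can allocate without taking a lock.

#include <atomic>
#include <cstddef>
#include <cstdint>

// Bump-pointer allocation: advance `end` by `num_bytes` with a CAS loop.
// Returns the start of the new allocation, or nullptr if `limit` would be exceeded.
inline uint8_t* BumpAlloc(std::atomic<uint8_t*>& end, uint8_t* limit, size_t num_bytes) {
  uint8_t* old_end = end.load(std::memory_order_relaxed);
  uint8_t* new_end;
  do {
    new_end = old_end + num_bytes;
    if (new_end > limit) {
      return nullptr;  // space exhausted: caller must get a new region or trigger GC
    }
    // On failure, compare_exchange_weak reloads the current value into old_end and we retry.
  } while (!end.compare_exchange_weak(old_end, new_end, std::memory_order_relaxed));
  return old_end;
}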

RegionSpace

How a RegionSpace is created

A RegionSpace first divides its memory into multiple equally sized blocks (regions) of 256 KB each. When allocating memory for an object, the RegionSpace searches region by region for one that satisfies the request.

 static constexpr size_t kRegionSize = 256 * KB; // Size of each region

Let's first look at how a RegionSpace is constructed.

RegionSpace::RegionSpace(const std::string& name, MemMap&& mem_map, bool use_generational_cc)
    : ContinuousMemMapAllocSpace(name,
                                 std::move(mem_map),
                                 mem_map.Begin(),
                                 mem_map.End(),
                                 mem_map.End(),
                                 kGcRetentionPolicyAlwaysCollect),
      region_lock_("Region lock", kRegionSpaceRegionLock),
      use_generational_cc_(use_generational_cc),
      time_(1U),
      num_regions_(mem_map_.Size() / kRegionSize),
      madvise_time_(0U),
      num_non_free_regions_(0U),
      num_evac_regions_(0U),
      max_peak_num_non_free_regions_(0U),
      non_free_region_index_limit_(0U),
      current_region_(&full_region_),
      evac_region_(nullptr),
      cyclic_alloc_region_index_(0U) {
  CHECK_ALIGNED(mem_map_.Size(), kRegionSize);
  CHECK_ALIGNED(mem_map_.Begin(), kRegionSize);
  DCHECK_GT(num_regions_, 0U);
  regions_.reset(new Region[num_regions_]);
  uint8_t* region_addr = mem_map_.Begin();
  for (size_t i = 0; i < num_regions_; ++i, region_addr += kRegionSize) {
    regions_[i].Init(i, region_addr, region_addr + kRegionSize);
  }
  mark_bitmap_ =
      accounting::ContinuousSpaceBitmap::Create("region space live bitmap", Begin(), Capacity());
  if (kIsDebugBuild) {
    CHECK_EQ(regions_[0].Begin(), Begin());
    for (size_t i = 0; i < num_regions_; ++i) {
      CHECK(regions_[i].IsFree());
      CHECK_EQ(static_cast<size_t>(regions_[i].End() - regions_[i].Begin()), kRegionSize);
      if (i + 1 < num_regions_) {
        CHECK_EQ(regions_[i].End(), regions_[i + 1].Begin());
      }
    }
    CHECK_EQ(regions_[num_regions_ - 1].End(), Limit());
  }
  DCHECK(!full_region_.IsFree());
  DCHECK(full_region_.IsAllocated());
  size_t ignored;
  DCHECK(full_region_.Alloc(kAlignment, &ignored, nullptr, &ignored) == nullptr);
  // Protect the whole region space from the start.
  Protect();
}

Each region defaults to 256 KB; the constructor divides the size of the mem_map by kRegionSize and builds an array of Region objects. The main data members of the Region class are shown below.

private:
    static bool GetUseGenerationalCC();

    size_t idx_;                        // The region's index in the region space.
    size_t live_bytes_;                 // The live bytes. Used to compute the live percent.
    uint8_t* begin_;                    // Start address of the region.
    Thread* thread_;                    // The owning thread if it's a tlab.
    // Note that `top_` can be higher than `end_` in the case of a
    // large region, where an allocated object spans multiple regions
    // (large region + one or more large tail regions).
    Atomic<uint8_t*> top_;              // End of the memory allocated so far.
    uint8_t* end_;                      // Upper bound (end address) of the region.
    // objects_allocated_ is accessed using memory_order_relaxed. Treat as approximate when there
    // are concurrent updates.
    Atomic<size_t> objects_allocated_;  // The number of objects allocated.
    uint32_t alloc_time_;               // The allocation time of the region.
    // Note that newly allocated and evacuated regions use -1 as
    // special value for `live_bytes_`.
    bool is_newly_allocated_;           // True if it's allocated after the last collection.
    bool is_a_tlab_;                    // True if it's a tlab.
    RegionState state_;                 // The region state (see RegionState).
    RegionType type_;                   // The region type (see RegionType).

    friend class RegionSpace;
  };

RegionState and RegionType distinguish a region's usage state and its type. RegionState has the following values:

 enum class RegionState : uint8_t {
    kRegionStateFree,            // A completely unused region.
    kRegionStateAllocated,       // A region that has had memory allocated from it.
    kRegionStateLarge,           // Head of a large region; a large object spans several consecutive regions.
    kRegionStateLargeTail,       // A non-head part of a large region.
  };

RegionType has the following values:

 enum class RegionType : uint8_t {
    kRegionTypeAll,              // All types.
    kRegionTypeFromSpace,        // From-space; only from-space regions are used to allocate objects.
    kRegionTypeUnevacFromSpace,  // GC-related: unevacuated from-space, not to be evacuated.
    kRegionTypeToSpace,          // GC-related: to-space.
    kRegionTypeNone,             // Initial type of a region.
  };

Object allocation


template<bool kForEvac>
inline mirror::Object* RegionSpace::AllocNonvirtual(size_t num_bytes,
                                                    /* out */ size_t* bytes_allocated,
                                                    /* out */ size_t* usable_size,
                                                    /* out */ size_t* bytes_tl_bulk_allocated) {
  DCHECK_ALIGNED(num_bytes, kAlignment);
  mirror::Object* obj;
  if (LIKELY(num_bytes <= kRegionSize)) { // The object fits within a single region.
    // Non-large object: call Alloc on the current region to try to allocate.
    obj = (kForEvac ? evac_region_ : current_region_)->Alloc(num_bytes,
                                                             bytes_allocated,
                                                             usable_size,
                                                             bytes_tl_bulk_allocated);
    if (LIKELY(obj != nullptr)) { // Allocation succeeded: return directly.
      return obj;
    }
    // Allocation failed: take the lock and retry, since another thread may have already updated current_region_.
    MutexLock mu(Thread::Current(), region_lock_);
    // Retry with current region since another thread may have updated
    // current_region_ or evac_region_.  TODO: fix race.
    obj = (kForEvac ? evac_region_ : current_region_)->Alloc(num_bytes,
                                                             bytes_allocated,
                                                             usable_size,
                                                             bytes_tl_bulk_allocated);
    if (LIKELY(obj != nullptr)) { // Allocation succeeded: return directly.
      return obj;
    }
    Region* r = AllocateRegion(kForEvac); 
    if (LIKELY(r != nullptr)) {
      obj = r->Alloc(num_bytes, bytes_allocated, usable_size, bytes_tl_bulk_allocated);
      CHECK(obj != nullptr);
      // Do our allocation before setting the region, this makes sure no threads race ahead
      // and fill in the region before we allocate the object. b/63153464
      if (kForEvac) {
        evac_region_ = r;
      } else { // Update current_region_.
        current_region_ = r;
      }
      return obj;
    }
  } else { // The object is larger than one region.
    // Large object.
    obj = AllocLarge<kForEvac>(num_bytes, bytes_allocated, usable_size, bytes_tl_bulk_allocated);
    if (LIKELY(obj != nullptr)) {
      return obj;
    }
  }
  return nullptr;
}

When reading this code, ignore the kForEvac parameter for now and follow only the kForEvac == false branch; kForEvac is related to the copying phase of GC.

First the requested size is checked. If it is no larger than one region (256 KB), allocation is first attempted from current_region_; if that fails, the lock is taken and the allocation is retried; if it still fails, AllocateRegion is called to obtain a new region, the allocation is made from that region, and it becomes the new current_region_. Let's first look at the implementation of Alloc.

inline mirror::Object* RegionSpace::Region::Alloc(size_t num_bytes,
                                                  /* out */ size_t* bytes_allocated,
                                                  /* out */ size_t* usable_size,
                                                  /* out */ size_t* bytes_tl_bulk_allocated) {
  DCHECK(IsAllocated() && IsInToSpace());
  DCHECK_ALIGNED(num_bytes, kAlignment);
  uint8_t* old_top;
  uint8_t* new_top;
  do {
    old_top = top_.load(std::memory_order_relaxed);
    new_top = old_top + num_bytes;
    if (UNLIKELY(new_top > end_)) {
      return nullptr;
    }
  } while (!top_.CompareAndSetWeakRelaxed(old_top, new_top));
  objects_allocated_.fetch_add(1, std::memory_order_relaxed);
  DCHECK_LE(Top(), end_);
  DCHECK_LT(old_top, end_);
  DCHECK_LE(new_top, end_);
  *bytes_allocated = num_bytes;
  if (usable_size != nullptr) {
    *usable_size = num_bytes;
  }
  *bytes_tl_bulk_allocated = num_bytes;
  return reinterpret_cast<mirror::Object*>(old_top);
}

As the code shows, allocating from a Region mainly checks whether top_ + num_bytes would exceed the region's bound (end_); if not, the allocation succeeds and top_ is advanced, otherwise it fails and nullptr is returned.

AllocateRegion is also straightforward: it walks the regions_ array and returns the first Region whose state is kRegionStateFree.

RegionSpace::Region* RegionSpace::AllocateRegion(bool for_evac) {
  if (!for_evac && (num_non_free_regions_ + 1) * 2 > num_regions_) {
    return nullptr;
  }
  for (size_t i = 0; i < num_regions_; ++i) {
    // When using the cyclic region allocation strategy, try to
    // allocate a region starting from the last cyclic allocated
    // region marker. Otherwise, try to allocate a region starting
    // from the beginning of the region space.
    // kCyclicRegionAllocation is only enabled in debug builds.
    size_t region_index = kCyclicRegionAllocation
        ? ((cyclic_alloc_region_index_ + i) % num_regions_)
        : i;
    Region* r = &regions_[region_index];
    if (r->IsFree()) {
      r->Unfree(this, time_);
      if (use_generational_cc_) {
        // TODO: Add an explanation for this assertion.
        DCHECK(!for_evac || !r->is_newly_allocated_);
      }
      if (for_evac) {
        ++num_evac_regions_;
        TraceHeapSize();
        // Evac doesn't count as newly allocated.
      } else {
        r->SetNewlyAllocated();
        ++num_non_free_regions_;
      }
      if (kCyclicRegionAllocation) {
        // Move the cyclic allocation region marker to the region
        // following the one that was just allocated.
        cyclic_alloc_region_index_ = (region_index + 1) % num_regions_;
      }
      return r;
    }
  }
  return nullptr;
}

Summary

This section introduced the Space class hierarchy and the characteristics of each Space. When constructing the Heap, the VM picks different Spaces for different needs, so the main things to understand about each Space are:

  • Whether the memory inside the Space is contiguous
  • How the memory blocks allocated from the Space are organized (address layout, whether extra metadata is needed, whether fragmentation can occur, and so on)

Analyzing the Heap constructor

With the structure of the Space hierarchy in mind, let's go back and look at the part of the Heap constructor that creates the Spaces.

Key members of the Heap class

First, the main member variables of the Heap class:

// All-known continuous spaces, where objects lie within fixed bounds.
  std::vector<space::ContinuousSpace*> continuous_spaces_ GUARDED_BY(Locks::mutator_lock_);

  // All-known discontinuous spaces, where objects may be placed throughout virtual memory.
  std::vector<space::DiscontinuousSpace*> discontinuous_spaces_ GUARDED_BY(Locks::mutator_lock_);

  // All-known alloc spaces, where objects may be or have been allocated.
  std::vector<space::AllocSpace*> alloc_spaces_;

  // A space where non-movable objects are allocated, when compaction is enabled it contains
  // Classes, ArtMethods, ArtFields, and non moving objects.
  space::MallocSpace* non_moving_space_;

  // Only one of rosalloc_space_ and dlmalloc_space_ is in use at a time.
  // Space which we use for the kAllocatorTypeROSAlloc.
  space::RosAllocSpace* rosalloc_space_;

  // Space which we use for the kAllocatorTypeDlMalloc.
  space::DlMallocSpace* dlmalloc_space_;

  // The main space is the space which the GC copies to and from on process state updates. This
  // space is typically either the dlmalloc_space_ or the rosalloc_space_.
  space::MallocSpace* main_space_;

  // The large object space we are currently allocating into.
  space::LargeObjectSpace* large_object_space_;

  // The card table, dirtied by the write barrier.
  std::unique_ptr<accounting::CardTable> card_table_;

  std::unique_ptr<accounting::ReadBarrierTable> rb_table_;

  // A mod-union table remembers all of the references from the it's space to other spaces.
  AllocationTrackingSafeMap<space::Space*, accounting::ModUnionTable*, kAllocatorTagHeap>
      mod_union_tables_;

  // A remembered set remembers all of the references from the it's space to the target space.
  AllocationTrackingSafeMap<space::Space*, accounting::RememberedSet*, kAllocatorTagHeap>
      remembered_sets_;

  // The current collector type.
  CollectorType collector_type_;
  // Which collector we use when the app is in the foreground.
  CollectorType foreground_collector_type_;
  // Which collector we will use when the app is notified of a transition to background.
  CollectorType background_collector_type_;
  // Desired collector type, heap trimming daemon transitions the heap if it is != collector_type_.
  CollectorType desired_collector_type_;

  // Lock which guards pending tasks.
  Mutex* pending_task_lock_ DEFAULT_MUTEX_ACQUIRED_AFTER;

  //...
  // Pointer to the space which becomes the new main space when we do homogeneous space compaction.
  // Use unique_ptr since the space is only added during the homogeneous compaction phase.
  std::unique_ptr<space::MallocSpace> main_space_backup_;

 
  • continuous_spaces_  holds the ContinuousSpace instances created in the Heap constructor
  • discontinuous_spaces_  holds the DiscontinuousSpace instances created in the Heap constructor
  • alloc_spaces_  holds the AllocSpace-typed spaces
  • main_space_  the primary space used for allocation
  • ...

Heap constructor logic

Some default parameters defined in heap.h:

  //heap.h
  static constexpr size_t kDefaultStartingSize = kPageSize;
  static constexpr size_t kDefaultInitialSize = 2 * MB;
  static constexpr size_t kDefaultMaximumSize = 256 * MB;
  // Default capacity of the non-moving space.
  static constexpr size_t kDefaultNonMovingSpaceCapacity = 64 * MB;
  static constexpr size_t kDefaultMaxFree = 2 * MB;
  static constexpr size_t kDefaultMinFree = kDefaultMaxFree / 4;
  static constexpr size_t kDefaultLongPauseLogThreshold = MsToNs(5);
  static constexpr size_t kDefaultLongGCLogThreshold = MsToNs(100);
  static constexpr size_t kDefaultTLABSize = 32 * KB;
  static constexpr double kDefaultTargetUtilization = 0.75;
  static constexpr double kDefaultHeapGrowthMultiplier = 2.0;
//heap.cc
#if defined(__LP64__) || !defined(ADDRESS_SANITIZER)
// 300 MB (0x12c00000) - (default non-moving space capacity).
uint8_t* const Heap::kPreferredAllocSpaceBegin =
    reinterpret_cast<uint8_t*>(300 * MB - kDefaultNonMovingSpaceCapacity);
#else
#ifdef __ANDROID__
// For 32-bit Android, use 0x20000000 because asan reserves 0x04000000 - 0x20000000.
uint8_t* const Heap::kPreferredAllocSpaceBegin = reinterpret_cast<uint8_t*>(0x20000000);
#else
// For 32-bit host, use 0x40000000 because asan uses most of the space below this.
uint8_t* const Heap::kPreferredAllocSpaceBegin = reinterpret_cast<uint8_t*>(0x40000000);
#endif
#endif
Heap::Heap(size_t initial_size,
           size_t growth_limit,
           //...
           size_t non_moving_space_capacity,
           const std::vector<std::string>& boot_class_path,
           const std::vector<std::string>& boot_class_path_locations,
           const std::string& image_file_name,
          //... (the long parameter list is omitted)
          ) {
  if (VLOG_IS_ON(heap) || VLOG_IS_ON(startup)) {
    LOG(INFO) << "Heap() entering";
  }
  if (kUseReadBarrier) {
    CHECK_EQ(foreground_collector_type_, kCollectorTypeCC);
    CHECK_EQ(background_collector_type_, kCollectorTypeCCBackground);
  } else if (background_collector_type_ != gc::kCollectorTypeHomogeneousSpaceCompact) {
    CHECK_EQ(IsMovingGc(foreground_collector_type_), IsMovingGc(background_collector_type_))
        << "Changing from " << foreground_collector_type_ << " to "
        << background_collector_type_ << " (or visa versa) is not supported.";
  }
  verification_.reset(new Verification(this));
  CHECK_GE(large_object_threshold, kMinLargeObjectThreshold);
  ScopedTrace trace(__FUNCTION__);
  Runtime* const runtime = Runtime::Current();
  // If we aren't the zygote, switch to the default non zygote allocator. This may update the
  // entrypoints.
  const bool is_zygote = runtime->IsZygote();
  if (!is_zygote) {
    // Background compaction is currently not supported for command line runs.
    if (background_collector_type_ != foreground_collector_type_) {
      VLOG(heap) << "Disabling background compaction for non zygote";
      background_collector_type_ = foreground_collector_type_;
    }
  }
  ChangeCollector(desired_collector_type_);
  live_bitmap_.reset(new accounting::HeapBitmap(this));
  mark_bitmap_.reset(new accounting::HeapBitmap(this));

  // We don't have hspace compaction enabled with CC.
  if (foreground_collector_type_ == kCollectorTypeCC) {
    use_homogeneous_space_compaction_for_oom_ = false;
  }
  bool support_homogeneous_space_compaction =
      background_collector_type_ == gc::kCollectorTypeHomogeneousSpaceCompact ||
      use_homogeneous_space_compaction_for_oom_;
  // We may use the same space the main space for the non moving space if we don't need to compact
  // from the main space.
  // This is not the case if we support homogeneous compaction or have a moving background
  // collector type.
  bool separate_non_moving_space = is_zygote ||
      support_homogeneous_space_compaction || IsMovingGc(foreground_collector_type_) ||
      IsMovingGc(background_collector_type_);

  // Requested begin for the alloc space, to follow the mapped image and oat files
  uint8_t* request_begin = nullptr;
  // Calculate the extra space required after the boot image, see allocations below.
  size_t heap_reservation_size = 0u;
  if (separate_non_moving_space) {
    heap_reservation_size = non_moving_space_capacity;
  } else if (foreground_collector_type_ != kCollectorTypeCC && is_zygote) {
    heap_reservation_size = capacity_;
  }
  heap_reservation_size = RoundUp(heap_reservation_size, kPageSize);
  // Load image space(s).
  std::vector<std::unique_ptr<space::ImageSpace>> boot_image_spaces;
  MemMap heap_reservation;
  if (space::ImageSpace::LoadBootImage(boot_class_path,
                                       boot_class_path_locations,
                                       image_file_name,
                                       image_instruction_set,
                                       runtime->ShouldRelocate(),
                                       /*executable=*/ !runtime->IsAotCompiler(),
                                       heap_reservation_size,
                                       &boot_image_spaces,
                                       &heap_reservation)) {
    DCHECK_EQ(heap_reservation_size, heap_reservation.IsValid() ? heap_reservation.Size() : 0u);
    DCHECK(!boot_image_spaces.empty());
    request_begin = boot_image_spaces.back()->GetImageHeader().GetOatFileEnd();
    DCHECK(!heap_reservation.IsValid() || request_begin == heap_reservation.Begin())
        << "request_begin=" << static_cast<const void*>(request_begin)
        << " heap_reservation.Begin()=" << static_cast<const void*>(heap_reservation.Begin());
    for (std::unique_ptr<space::ImageSpace>& space : boot_image_spaces) {
      boot_image_spaces_.push_back(space.get());
      AddSpace(space.release());
    }
    boot_images_start_address_ = PointerToLowMemUInt32(boot_image_spaces_.front()->Begin());
    uint32_t boot_images_end =
        PointerToLowMemUInt32(boot_image_spaces_.back()->GetImageHeader().GetOatFileEnd());
    boot_images_size_ = boot_images_end - boot_images_start_address_;
    if (kIsDebugBuild) {
      VerifyBootImagesContiguity(boot_image_spaces_);
    }
  } else {
    if (foreground_collector_type_ == kCollectorTypeCC) {
      // Need to use a low address so that we can allocate a contiguous 2 * Xmx space
      // when there's no image (dex2oat for target).
      request_begin = kPreferredAllocSpaceBegin;
    }
    // Gross hack to make dex2oat deterministic.
    if (foreground_collector_type_ == kCollectorTypeMS && Runtime::Current()->IsAotCompiler()) {
      // Currently only enabled for MS collector since that is what the deterministic dex2oat uses.
      // b/26849108
      request_begin = reinterpret_cast<uint8_t*>(kAllocSpaceBeginForDeterministicAoT);
    }
  }

  /*
  requested_alloc_space_begin ->     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
                                     +-  nonmoving space (non_moving_space_capacity)+-
                                     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
                                     +-????????????????????????????????????????????+-
                                     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
                                     +-main alloc space / bump space 1 (capacity_) +-
                                     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
                                     +-????????????????????????????????????????????+-
                                     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
                                     +-main alloc space2 / bump space 2 (capacity_)+-
                                     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
  */

  MemMap main_mem_map_1;
  MemMap main_mem_map_2;

  std::string error_str;
  MemMap non_moving_space_mem_map;
  if (separate_non_moving_space) {
    ScopedTrace trace2("Create separate non moving space");
    // If we are the zygote, the non moving space becomes the zygote space when we run
    // PreZygoteFork the first time. In this case, call the map "zygote space" since we can't
    // rename the mem map later.
    const char* space_name = is_zygote ? kZygoteSpaceName : kNonMovingSpaceName;
    // Reserve the non moving mem map before the other two since it needs to be at a specific
    // address.
    DCHECK_EQ(heap_reservation.IsValid(), !boot_image_spaces_.empty());
    if (heap_reservation.IsValid()) {
      non_moving_space_mem_map = heap_reservation.RemapAtEnd(
          heap_reservation.Begin(), space_name, PROT_READ | PROT_WRITE, &error_str);
    } else {
      non_moving_space_mem_map = MapAnonymousPreferredAddress(
          space_name, request_begin, non_moving_space_capacity, &error_str);
    }
    CHECK(non_moving_space_mem_map.IsValid()) << error_str;
    DCHECK(!heap_reservation.IsValid());
    // Try to reserve virtual memory at a lower address if we have a separate non moving space.
     // kPreferredAllocSpaceBegin + non_moving_space_capacity (64 MB) == 300 MB (0x12c00000).
    request_begin = kPreferredAllocSpaceBegin + non_moving_space_capacity;
  }
  // Attempt to create 2 mem maps at or after the requested begin.
  if (foreground_collector_type_ != kCollectorTypeCC) {
    ScopedTrace trace2("Create main mem map");
    if (separate_non_moving_space || !is_zygote) {
      main_mem_map_1 = MapAnonymousPreferredAddress(
          kMemMapSpaceName[0], request_begin, capacity_, &error_str);
    } else {
      // If no separate non-moving space and we are the zygote, the main space must come right after
      // the image space to avoid a gap. This is required since we want the zygote space to be
      // adjacent to the image space.
      DCHECK_EQ(heap_reservation.IsValid(), !boot_image_spaces_.empty());
      main_mem_map_1 = MemMap::MapAnonymous(
          kMemMapSpaceName[0],
          request_begin,
          capacity_,
          PROT_READ | PROT_WRITE,
          /* low_4gb= */ true,
          /* reuse= */ false,
          heap_reservation.IsValid() ? &heap_reservation : nullptr,
          &error_str);
    }
    CHECK(main_mem_map_1.IsValid()) << error_str;
    DCHECK(!heap_reservation.IsValid());
  }
  if (support_homogeneous_space_compaction ||
      background_collector_type_ == kCollectorTypeSS ||
      foreground_collector_type_ == kCollectorTypeSS) {
    ScopedTrace trace2("Create main mem map 2");
    main_mem_map_2 = MapAnonymousPreferredAddress(
        kMemMapSpaceName[1], main_mem_map_1.End(), capacity_, &error_str);
    CHECK(main_mem_map_2.IsValid()) << error_str;
  }

  // Create the non moving space first so that bitmaps don't take up the address range.
  if (separate_non_moving_space) {
    ScopedTrace trace2("Add non moving space");
    // Non moving space is always dlmalloc since we currently don't have support for multiple
    // active rosalloc spaces.
    const size_t size = non_moving_space_mem_map.Size();
    const void* non_moving_space_mem_map_begin = non_moving_space_mem_map.Begin();
    non_moving_space_ = space::DlMallocSpace::CreateFromMemMap(std::move(non_moving_space_mem_map),
                                                               "zygote / non moving space",
                                                               kDefaultStartingSize,
                                                               initial_size,
                                                               size,
                                                               size,
                                                               /* can_move_objects= */ false);
    CHECK(non_moving_space_ != nullptr) << "Failed creating non moving space "
        << non_moving_space_mem_map_begin;
    non_moving_space_->SetFootprintLimit(non_moving_space_->Capacity());
    AddSpace(non_moving_space_);
  }
  // Create other spaces based on whether or not we have a moving GC.
  if (foreground_collector_type_ == kCollectorTypeCC) {
    CHECK(separate_non_moving_space);
    // Reserve twice the capacity, to allow evacuating every region for explicit GCs.
    // kRegionSpaceName is "main space (region space)".
    MemMap region_space_mem_map =
        space::RegionSpace::CreateMemMap(kRegionSpaceName, capacity_ * 2, request_begin);
    CHECK(region_space_mem_map.IsValid()) << "No region space mem map";
    region_space_ = space::RegionSpace::Create(
        kRegionSpaceName, std::move(region_space_mem_map), use_generational_cc_);
    AddSpace(region_space_);
  } else if (IsMovingGc(foreground_collector_type_)) {
    // Create bump pointer spaces.
    // We only to create the bump pointer if the foreground collector is a compacting GC.
    // TODO: Place bump-pointer spaces somewhere to minimize size of card table.
    bump_pointer_space_ = space::BumpPointerSpace::CreateFromMemMap("Bump pointer space 1",
                                                                    std::move(main_mem_map_1));
    CHECK(bump_pointer_space_ != nullptr) << "Failed to create bump pointer space";
    AddSpace(bump_pointer_space_);
    temp_space_ = space::BumpPointerSpace::CreateFromMemMap("Bump pointer space 2",
                                                            std::move(main_mem_map_2));
    CHECK(temp_space_ != nullptr) << "Failed to create bump pointer space";
    AddSpace(temp_space_);
    CHECK(separate_non_moving_space);
  } else {
    CreateMainMallocSpace(std::move(main_mem_map_1), initial_size, growth_limit_, capacity_);
    CHECK(main_space_ != nullptr);
    AddSpace(main_space_);
    if (!separate_non_moving_space) {
      non_moving_space_ = main_space_;
      CHECK(!non_moving_space_->CanMoveObjects());
    }
    if (main_mem_map_2.IsValid()) {
      const char* name = kUseRosAlloc ? kRosAllocSpaceName[1] : kDlMallocSpaceName[1];
      main_space_backup_.reset(CreateMallocSpaceFromMemMap(std::move(main_mem_map_2),
                                                           initial_size,
                                                           growth_limit_,
                                                           capacity_,
                                                           name,
                                                           /* can_move_objects= */ true));
      CHECK(main_space_backup_.get() != nullptr);
      // Add the space so its accounted for in the heap_begin and heap_end.
      AddSpace(main_space_backup_.get());
    }
  }
  CHECK(non_moving_space_ != nullptr);
  CHECK(!non_moving_space_->CanMoveObjects());
  // Allocate the large object space.
  if (large_object_space_type == space::LargeObjectSpaceType::kFreeList) {
    large_object_space_ = space::FreeListSpace::Create("free list large object space", capacity_);
    CHECK(large_object_space_ != nullptr) << "Failed to create large object space";
  } else if (large_object_space_type == space::LargeObjectSpaceType::kMap) {
    large_object_space_ = space::LargeObjectMapSpace::Create("mem map large object space");
    CHECK(large_object_space_ != nullptr) << "Failed to create large object space";
  } else {
    // Disable the large object space by making the cutoff excessively large.
    large_object_threshold_ = std::numeric_limits<size_t>::max();
    large_object_space_ = nullptr;
  }
  if (large_object_space_ != nullptr) {
    AddSpace(large_object_space_);
  }
  // Compute heap capacity. Continuous spaces are sorted in order of Begin().
  CHECK(!continuous_spaces_.empty());
  // Relies on the spaces being sorted.
  uint8_t* heap_begin = continuous_spaces_.front()->Begin();
  uint8_t* heap_end = continuous_spaces_.back()->Limit();
  size_t heap_capacity = heap_end - heap_begin;
  // Remove the main backup space since it slows down the GC to have unused extra spaces.
  // TODO: Avoid needing to do this.
  if (main_space_backup_.get() != nullptr) {
    RemoveSpace(main_space_backup_.get());
  }
  // Allocate the card table.
  // We currently don't support dynamically resizing the card table.
  // Since we don't know where in the low_4gb the app image will be located, make the card table
  // cover the whole low_4gb. TODO: Extend the card table in AddSpace.
  UNUSED(heap_capacity);
  // Start at 4 KB, we can be sure there are no spaces mapped this low since the address range is
  // reserved by the kernel.
  static constexpr size_t kMinHeapAddress = 4 * KB;
  card_table_.reset(accounting::CardTable::Create(reinterpret_cast<uint8_t*>(kMinHeapAddress),
                                                  4 * GB - kMinHeapAddress));
  CHECK(card_table_.get() != nullptr) << "Failed to create card table";
  if (foreground_collector_type_ == kCollectorTypeCC && kUseTableLookupReadBarrier) {
    rb_table_.reset(new accounting::ReadBarrierTable());
    DCHECK(rb_table_->IsAllCleared());
  }
  if (HasBootImageSpace()) {
    // Don't add the image mod union table if we are running without an image, this can crash if
    // we use the CardCache implementation.
    for (space::ImageSpace* image_space : GetBootImageSpaces()) {
      accounting::ModUnionTable* mod_union_table = new accounting::ModUnionTableToZygoteAllocspace(
          "Image mod-union table", this, image_space);
      CHECK(mod_union_table != nullptr) << "Failed to create image mod-union table";
      AddModUnionTable(mod_union_table);
    }
  }
  if (collector::SemiSpace::kUseRememberedSet && non_moving_space_ != main_space_) {
    accounting::RememberedSet* non_moving_space_rem_set =
        new accounting::RememberedSet("Non-moving space remembered set", this, non_moving_space_);
    CHECK(non_moving_space_rem_set != nullptr) << "Failed to create non-moving space remembered set";
    AddRememberedSet(non_moving_space_rem_set);
  }
  // TODO: Count objects in the image space here?
  num_bytes_allocated_.store(0, std::memory_order_relaxed);
  mark_stack_.reset(accounting::ObjectStack::Create("mark stack", kDefaultMarkStackSize,
                                                    kDefaultMarkStackSize));
  const size_t alloc_stack_capacity = max_allocation_stack_size_ + kAllocationStackReserveSize;
  allocation_stack_.reset(accounting::ObjectStack::Create(
      "allocation stack", max_allocation_stack_size_, alloc_stack_capacity));
  live_stack_.reset(accounting::ObjectStack::Create(
      "live stack", max_allocation_stack_size_, alloc_stack_capacity));
  // It's still too early to take a lock because there are no threads yet, but we can create locks
  // now. We don't create it earlier to make it clear that you can't use locks during heap
  // initialization.
  gc_complete_lock_ = new Mutex("GC complete lock");
  gc_complete_cond_.reset(new ConditionVariable("GC complete condition variable",
                                                *gc_complete_lock_));

  thread_flip_lock_ = new Mutex("GC thread flip lock");
  thread_flip_cond_.reset(new ConditionVariable("GC thread flip condition variable",
                                                *thread_flip_lock_));
  task_processor_.reset(new TaskProcessor());
  reference_processor_.reset(new ReferenceProcessor());
  pending_task_lock_ = new Mutex("Pending task lock");
  if (ignore_target_footprint_) {
    SetIdealFootprint(std::numeric_limits<size_t>::max());
    concurrent_start_bytes_ = std::numeric_limits<size_t>::max();
  }
  CHECK_NE(target_footprint_.load(std::memory_order_relaxed), 0U);
  // Create our garbage collectors.
  for (size_t i = 0; i < 2; ++i) {
    const bool concurrent = i != 0;
    if ((MayUseCollector(kCollectorTypeCMS) && concurrent) ||
        (MayUseCollector(kCollectorTypeMS) && !concurrent)) {
      garbage_collectors_.push_back(new collector::MarkSweep(this, concurrent));
      garbage_collectors_.push_back(new collector::PartialMarkSweep(this, concurrent));
      garbage_collectors_.push_back(new collector::StickyMarkSweep(this, concurrent));
    }
  }
  if (kMovingCollector) {
    if (MayUseCollector(kCollectorTypeSS) ||
        MayUseCollector(kCollectorTypeHomogeneousSpaceCompact) ||
        use_homogeneous_space_compaction_for_oom_) {
      semi_space_collector_ = new collector::SemiSpace(this);
      garbage_collectors_.push_back(semi_space_collector_);
    }
    if (MayUseCollector(kCollectorTypeCC)) {
      concurrent_copying_collector_ = new collector::ConcurrentCopying(this,
                                                                       /*young_gen=*/false,
                                                                       use_generational_cc_,
                                                                       "",
                                                                       measure_gc_performance);
      if (use_generational_cc_) {
        young_concurrent_copying_collector_ = new collector::ConcurrentCopying(
            this,
            /*young_gen=*/true,
            use_generational_cc_,
            "young",
            measure_gc_performance);
      }
      active_concurrent_copying_collector_.store(concurrent_copying_collector_,
                                                 std::memory_order_relaxed);
      DCHECK(region_space_ != nullptr);
      concurrent_copying_collector_->SetRegionSpace(region_space_);
      if (use_generational_cc_) {
        young_concurrent_copying_collector_->SetRegionSpace(region_space_);
        // At this point, non-moving space should be created.
        DCHECK(non_moving_space_ != nullptr);
        concurrent_copying_collector_->CreateInterRegionRefBitmaps();
      }
      garbage_collectors_.push_back(concurrent_copying_collector_);
      if (use_generational_cc_) {
        garbage_collectors_.push_back(young_concurrent_copying_collector_);
      }
    }
  }
  if (!GetBootImageSpaces().empty() && non_moving_space_ != nullptr &&
      (is_zygote || separate_non_moving_space)) {
    // Check that there's no gap between the image space and the non moving space so that the
    // immune region won't break (eg. due to a large object allocated in the gap). This is only
    // required when we're the zygote.
    // Space with smallest Begin().
    space::ImageSpace* first_space = nullptr;
    for (space::ImageSpace* space : boot_image_spaces_) {
      if (first_space == nullptr || space->Begin() < first_space->Begin()) {
        first_space = space;
      }
    }
    bool no_gap = MemMap::CheckNoGaps(*first_space->GetMemMap(), *non_moving_space_->GetMemMap());
    if (!no_gap) {
      PrintFileToLog("/proc/self/maps", LogSeverity::ERROR);
      MemMap::DumpMaps(LOG_STREAM(ERROR), /* terse= */ true);
      LOG(FATAL) << "There's a gap between the image space and the non-moving space";
    }
  }
  // Perfetto Java Heap Profiler Support.
  if (runtime->IsPerfettoJavaHeapStackProfEnabled()) {
    // Perfetto Plugin is loaded and enabled, initialize the Java Heap Profiler.
    InitPerfettoJavaHeapProf();
  } else {
    // Disable the Java Heap Profiler.
    GetHeapSampler().DisableHeapSampler(/*disable_ptr=*/nullptr, /*disable_info_ptr=*/nullptr);
  }

  instrumentation::Instrumentation* const instrumentation = runtime->GetInstrumentation();
  if (gc_stress_mode_) {
    backtrace_lock_ = new Mutex("GC complete lock");
  }
  if (is_running_on_memory_tool_ || gc_stress_mode_) {
    instrumentation->InstrumentQuickAllocEntryPoints();
  }
  if (VLOG_IS_ON(heap) || VLOG_IS_ON(startup)) {
    LOG(INFO) << "Heap() exiting";
  }
}

Notes on the key input parameters: non_moving_space_capacity defaults to kDefaultNonMovingSpaceCapacity (64 MB) from heap.h. The main steps of the construction are:

  1. Use ImageSpace to load the boot image files (boot.art / .oat); they are mapped into the process at ART_BASE_ADDRESS (0x70000000) plus a random offset chosen at runtime.
  2. separate non moving space (the Space-creation logic here depends strongly on the GC type)
    1. Create non_moving_space_mem_map right after the loaded image, with a size of 64 MB (kDefaultNonMovingSpaceCapacity), and update request_begin
      1. non_moving_space_ mainly holds long-lived objects such as Classes, ArtMethods and ArtFields
      2. In PreZygoteFork, non_moving_space_ is split into the dalvik-zygote space and a new non moving space
    2. Depending on the foreground/background GC types, optionally create main_mem_map_1 and main_mem_map_2, each of size capacity_
      1. When the foreground collector is the concurrent copying (CC) collector, neither is created
      2. When the foreground collector is another moving GC (e.g. semi-space), both are created
    3. From non_moving_space_mem_map create non_moving_space_, a DlMallocSpace whose initial size is Heap's first parameter initial_size (corresponding to the VM's -Xms option, the starting heap size, 4 MB by default)
    4. Next, check which kind of foreground collector is in use
      1. Concurrent copying: create a RegionSpace twice the size of capacity_
      2. Another moving GC (semi-space): create two BumpPointerSpaces backed by the previously prepared main_mem_map_1 and main_mem_map_2
      3. Otherwise call CreateMainMallocSpace with main_mem_map_1; depending on kUseRosAlloc it creates a RosAllocSpace or a DlMallocSpace, points rosalloc_space_ or dlmalloc_space_ at it, and also sets main_space_ to the new space
  3. Create large_object_space_, of type LargeObjectSpace
  4. The remaining code mostly creates data structures that help mark and reclaim garbage objects in the spaces; for brevity they are not analyzed here and are better studied together with the GC implementation.

To sum up this part of the code: the oat/art image files are mapped into memory first, then non_moving_space_ (a DlMallocSpace) is created, and finally the Spaces used for ordinary object allocation are created according to the foreground/background GC types.

PreZygoteFork

To optimize application memory usage and startup time, every Java process except the very first one (the Zygote) is forked from the Zygote. Each time the Zygote forks a child, the static function ZygoteHooks_nativePreFork is called first; the path from there to Heap::PreZygoteFork is: ZygoteHooks_nativePreFork → Runtime::PreZygoteFork() → Heap::PreZygoteFork().

void Runtime::PreZygoteFork() {
  if (GetJit() != nullptr) {
    GetJit()->PreZygoteFork();
  }
  heap_->PreZygoteFork();
  PreZygoteForkNativeBridge();
}
void Heap::PreZygoteFork() {
  if (!HasZygoteSpace()) {
    // We still want to GC in case there is some unreachable non moving objects that could cause a
    // suboptimal bin packing when we compact the zygote space.
    CollectGarbageInternal(collector::kGcTypeFull, kGcCauseBackground, false);
    // Trim the pages at the end of the non moving space. Trim while not holding zygote lock since
    // the trim process may require locking the mutator lock.
    non_moving_space_->Trim();
  }
  Thread* self = Thread::Current();
  MutexLock mu(self, zygote_creation_lock_);
  // Try to see if we have any Zygote spaces.
  if (HasZygoteSpace()) {
    return;
  }
  Runtime::Current()->GetInternTable()->AddNewTable();
  Runtime::Current()->GetClassLinker()->MoveClassTableToPreZygote();
  VLOG(heap) << "Starting PreZygoteFork";
  // The end of the non-moving space may be protected, unprotect it so that we can copy the zygote
  // there.
  non_moving_space_->GetMemMap()->Protect(PROT_READ | PROT_WRITE);
  const bool same_space = non_moving_space_ == main_space_;
  if (kCompactZygote) {
    // Temporarily disable rosalloc verification because the zygote
    // compaction will mess up the rosalloc internal metadata.
    ScopedDisableRosAllocVerification disable_rosalloc_verif(this);
    ZygoteCompactingCollector zygote_collector(this, is_running_on_memory_tool_);
    zygote_collector.BuildBins(non_moving_space_);
    // Create a new bump pointer space which we will compact into.
    space::BumpPointerSpace target_space("zygote bump space", non_moving_space_->End(),
                                         non_moving_space_->Limit());
    // Compact the bump pointer space to a new zygote bump pointer space.
    bool reset_main_space = false;
    if (IsMovingGc(collector_type_)) {
      if (collector_type_ == kCollectorTypeCC) {
        zygote_collector.SetFromSpace(region_space_);
      } else {
        zygote_collector.SetFromSpace(bump_pointer_space_);
      }
    } else {
      CHECK(main_space_ != nullptr);
      CHECK_NE(main_space_, non_moving_space_)
          << "Does not make sense to compact within the same space";
      // Copy from the main space.
      zygote_collector.SetFromSpace(main_space_);
      reset_main_space = true;
    }
    zygote_collector.SetToSpace(&target_space);
    zygote_collector.SetSwapSemiSpaces(false);
    zygote_collector.Run(kGcCauseCollectorTransition, false);
    if (reset_main_space) {
      main_space_->GetMemMap()->Protect(PROT_READ | PROT_WRITE);
      madvise(main_space_->Begin(), main_space_->Capacity(), MADV_DONTNEED);
      MemMap mem_map = main_space_->ReleaseMemMap();
      RemoveSpace(main_space_);
      space::Space* old_main_space = main_space_;
      CreateMainMallocSpace(std::move(mem_map),
                            kDefaultInitialSize,
                            std::min(mem_map.Size(), growth_limit_),
                            mem_map.Size());
      delete old_main_space;
      AddSpace(main_space_);
    } else {
      if (collector_type_ == kCollectorTypeCC) {
        region_space_->GetMemMap()->Protect(PROT_READ | PROT_WRITE);
        // Evacuated everything out of the region space, clear the mark bitmap.
        region_space_->GetMarkBitmap()->Clear();
      } else {
        bump_pointer_space_->GetMemMap()->Protect(PROT_READ | PROT_WRITE);
      }
    }
    if (temp_space_ != nullptr) {
      CHECK(temp_space_->IsEmpty());
    }
    IncrementFreedEver();
    // Update the end and write out image.
    non_moving_space_->SetEnd(target_space.End());
    non_moving_space_->SetLimit(target_space.Limit());
    VLOG(heap) << "Create zygote space with size=" << non_moving_space_->Size() << " bytes";
  }
  // Change the collector to the post zygote one.
  ChangeCollector(foreground_collector_type_);
  // Save the old space so that we can remove it after we complete creating the zygote space.
  space::MallocSpace* old_alloc_space = non_moving_space_;
  // Turn the current alloc space into a zygote space and obtain the new alloc space composed of
  // the remaining available space.
  // Remove the old space before creating the zygote space since creating the zygote space sets
  // the old alloc space's bitmaps to null.
  RemoveSpace(old_alloc_space);
  if (collector::SemiSpace::kUseRememberedSet) {
    // Consistency bound check.
    FindRememberedSetFromSpace(old_alloc_space)->AssertAllDirtyCardsAreWithinSpace();
    // Remove the remembered set for the now zygote space (the old
    // non-moving space). Note now that we have compacted objects into
    // the zygote space, the data in the remembered set is no longer
    // needed. The zygote space will instead have a mod-union table
    // from this point on.
    RemoveRememberedSet(old_alloc_space);
  }
  // Remaining space becomes the new non moving space.
  zygote_space_ = old_alloc_space->CreateZygoteSpace(kNonMovingSpaceName, low_memory_mode_,
                                                     &non_moving_space_);
  CHECK(!non_moving_space_->CanMoveObjects());
    
  // If main_space_ previously pointed at the old non-moving space (now the zygote space), repoint it at the new non_moving_space_.
  if (same_space) {
    main_space_ = non_moving_space_;
    SetSpaceAsDefault(main_space_);
  }
  delete old_alloc_space;
  CHECK(HasZygoteSpace()) << "Failed creating zygote space";
  AddSpace(zygote_space_);
  non_moving_space_->SetFootprintLimit(non_moving_space_->Capacity());
  AddSpace(non_moving_space_);
  constexpr bool set_mark_bit = kUseBakerReadBarrier
                                && gc::collector::ConcurrentCopying::kGrayDirtyImmuneObjects;
  if (set_mark_bit) {
    // Treat all of the objects in the zygote as marked to avoid unnecessary dirty pages. This is
    // safe since we mark all of the objects that may reference non immune objects as gray.
    zygote_space_->SetMarkBitInLiveObjects();
  }

  // Create the zygote space mod union table.
  accounting::ModUnionTable* mod_union_table =
      new accounting::ModUnionTableCardCache("zygote space mod-union table", this, zygote_space_);
  CHECK(mod_union_table != nullptr) << "Failed to create zygote space mod-union table";

  if (collector_type_ != kCollectorTypeCC) {
    // Set all the cards in the mod-union table since we don't know which objects contain references
    // to large objects.
    mod_union_table->SetCards();
  } else {
    // Make sure to clear the zygote space cards so that we don't dirty pages in the next GC. There
    // may be dirty cards from the zygote compaction or reference processing. These cards are not
    // necessary to have marked since the zygote space may not refer to any objects not in the
    // zygote or image spaces at this point.
    mod_union_table->ProcessCards();
    mod_union_table->ClearTable();

    // For CC we never collect zygote large objects. This means we do not need to set the cards for
    // the zygote mod-union table and we can also clear all of the existing image mod-union tables.
    // The existing mod-union tables are only for image spaces and may only reference zygote and
    // image objects.
    for (auto& pair : mod_union_tables_) {
      CHECK(pair.first->IsImageSpace());
      CHECK(!pair.first->AsImageSpace()->GetImageHeader().IsAppImage());
      accounting::ModUnionTable* table = pair.second;
      table->ClearTable();
    }
  }
  AddModUnionTable(mod_union_table);
  large_object_space_->SetAllLargeObjectsAsZygoteObjects(self, set_mark_bit);
  if (collector::SemiSpace::kUseRememberedSet) {
    // Add a new remembered set for the post-zygote non-moving space.
    accounting::RememberedSet* post_zygote_non_moving_space_rem_set =
        new accounting::RememberedSet("Post-zygote non-moving space remembered set", this,
                                      non_moving_space_);
    CHECK(post_zygote_non_moving_space_rem_set != nullptr)
        << "Failed to create post-zygote non-moving space remembered set";
    AddRememberedSet(post_zygote_non_moving_space_rem_set);
  }
}

Heap::PreZygoteFork runs in the zygote process; its main logic is:

  1. On the first call, run a full GC first (and trim the non-moving space); on later calls zygote_space_ already exists and the function returns immediately.
  2. Run a ZygoteCompactingCollector to collect garbage in non_moving_space_.
  3. Use zygote_collector to perform a compaction, packing the contents of non_moving_space_ tightly.
    1. The compaction target is a temporary BumpPointerSpace laid over the unused tail of non_moving_space_.
  4. Split non_moving_space_ into two parts: the used portion becomes the zygote space, and the unused remainder becomes the new non_moving_space_.
  5. Mark the objects still alive in large_object_space_ as zygote objects.

To understand this code, keep one fact in mind: every Java process is forked from the Zygote, and the Zygote itself has already allocated objects. At fork time the system treats the space that is still live after GC as common to everyone, so that portion is named the ZygoteSpace and is shared by all processes (via fork), while the space left over after compaction is still called non_moving_space and is the per-process memory that each forked process uses for itself.
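A conceptual sketch of the split (not ART's MallocSpace::CreateZygoteSpace; the helper names here are hypothetical): after compaction, the populated prefix of the old non-moving space becomes the shared zygote space and the unused tail becomes the child's non-moving space.

#include <cstdint>

struct RangeSketch { uint8_t* begin; uint8_t* end; };

void SplitAfterCompaction(uint8_t* begin, uint8_t* end, uint8_t* limit,
                          RangeSketch* zygote_out, RangeSketch* non_moving_out) {
  *zygote_out = {begin, end};      // [Begin(), End()): surviving objects, shared via fork
  *non_moving_out = {end, limit};  // [End(), Limit()): fresh per-process non-moving space
}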

Memory mappings of a running VM (the Space part)

We can print a process's memory mappings via /proc/${pid}/maps (for example with adb shell cat /proc/$(pidof <package>)/maps) to help make sense of the Heap construction code.

12c00000-12d00000 rw-p 00000000 00:04 19279                              	/dev/ashmem/dalvik-main space (region space) (deleted)
12d00000-13780000 ---p 00100000 00:04 19279                              /dev/ashmem/dalvik-main space (region space) (deleted)
13780000-52c00000 rw-p 00b80000 00:04 19279                              /dev/ashmem/dalvik-main space (region space) (deleted)
700ee000-7029c000 rw-p 00000000 103:13 1540150                           /data/dalvik-cache/arm/system@framework@boot.art
7029c000-70338000 rw-p 00000000 103:13 1540151                           /data/dalvik-cache/arm/system@framework@boot-core-libart.art
70338000-70376000 rw-p 00000000 103:13 1540163                           /data/dalvik-cache/arm/system@framework@boot-conscrypt.art
70376000-70397000 rw-p 00000000 103:13 1540168                           /data/dalvik-cache/arm/system@framework@boot-okhttp.art
//...省略
708f7000-70ac2000 r--p 00000000 103:11 2385                              /system/framework/arm/boot.oat
72b9a000-72c2e000 rw-p 00000000 00:04 19276                              /dev/ashmem/dalvik-zygote space (deleted)
72c2e000-72c2f000 rw-p 00000000 00:04 25961                              /dev/ashmem/dalvik-non moving space (deleted)
72c2f000-72c4a000 rw-p 00001000 00:04 25961                              /dev/ashmem/dalvik-non moving space (deleted)
72c4a000-7639b000 ---p 0001c000 00:04 25961                              /dev/ashmem/dalvik-non moving space (deleted)
7639b000-76b9a000 rw-p 0376d000 00:04 25961                              /dev/ashmem/dalvik-non moving space (deleted)
b9a8e000-b9a93000 r-xp 00000000 103:11 1956                              /system/bin/app_process32
//...
ec597000-ec599000 r--p 00015000 103:11 550                               /system/lib/libandroid.so
ec599000-ec59a000 rw-p 00017000 103:11 550                               /system/lib/libandroid.so
ec59e000-ec5e8000 r--s 00000000 103:11 2209                              /system/fonts/RobotoCondensed-Regular.ttf
ec5e8000-ec6b0000 rw-p 00000000 00:04 22618                              /dev/ashmem/dalvik-indirect ref table (deleted)
ec6b0000-ec778000 rw-p 00000000 00:04 22617                              /dev/ashmem/dalvik-indirect ref table (deleted)
ec778000-ec978000 rw-p 00000000 00:04 19288                              /dev/ashmem/dalvik-rb copying gc mark stack (deleted)
ec978000-ed178000 rw-p 00000000 00:04 19287                              /dev/ashmem/dalvik-concurrent copying gc mark stack (deleted)
ed178000-ed979000 rw-p 00000000 00:04 19286                              /dev/ashmem/dalvik-live stack (deleted)
ed979000-ee17a000 rw-p 00000000 00:04 19285                              /dev/ashmem/dalvik-allocation stack (deleted)
ee17a000-ee57b000 rw-p 00000000 00:04 19283                              /dev/ashmem/dalvik-card table (deleted)
ee57b000-ef57b000 rw-p 00000000 00:04 19280                              /dev/ashmem/dalvik-region space live bitmap (deleted)
ef57b000-ef67b000 rw-p 00000000 00:04 19278                              /dev/ashmem/dalvik-allocspace zygote / non moving space mark-bitmap 0 (deleted)
ef67b000-ef77b000 rw-p 00000000 00:04 19277                              /dev/ashmem/dalvik-allocspace zygote / non moving space live-bitmap 0 (deleted)
ef77b000-efa07000 r--s 00000000 103:11 2353                              /system/framework/arm/boot-telephony-common.vdex

f1390000-f1884000 r--s 00000000 103:11 2376                              /system/framework/arm/boot.vdex
f1884000-f193d000 r-xp 00000000 103:11 481                               /system/lib/libart.so
//...
ec52c000-ec54c000 rw-p 00000000 00:04 75781                              /dev/ashmem/dalvik-LinearAlloc (deleted)
//...
f2303000-f2323000 rw-p 00000000 00:04 45872                              /dev/ashmem/dalvik-LinearAlloc (deleted)

The mapping output above can roughly be pictured as the diagram below. image.png As it shows, the first Space is named dalvik-main space and always starts at address 0x12c00000; it corresponds to the Heap object's main_space_ field. After it come the ImageSpaces, mapped at the fixed base 0x70000000 plus a random offset; these regions mainly hold the loaded .oat and .art files. Next come the dalvik-zygote space and dalvik-non moving space that PreZygoteFork split out of non_moving_space_.
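If you want to pull out only the ART-related mappings of a process programmatically, a small standalone sketch like the one below is enough (an illustration only, not ART code). When run inside an app process, for example from a native library, it filters /proc/self/maps down to the Space-backed regions discussed above:

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Print only the mappings whose name mentions dalvik-, .art or .oat,
// i.e. the Space-backed regions discussed above.
void DumpArtMappings() {
  std::ifstream maps("/proc/self/maps");
  std::string line;
  while (std::getline(maps, line)) {
    if (line.find("dalvik-") != std::string::npos ||
        line.find(".art") != std::string::npos ||
        line.find(".oat") != std::string::npos) {
      std::cout << line << '\n';
    }
  }
}

int main() {
  DumpArtMappings();  // in a real app you would call this from JNI instead
  return 0;
}
```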

How memory is allocated for a Java object

Finally, let's walk through the execution flow of AllocObject, the Heap function that allocates memory when a new Java object is created.

  // Allocates and initializes storage for an object instance.
  template <bool kInstrumented = true, typename PreFenceVisitor>
  mirror::Object* AllocObject(Thread* self,
                              ObjPtr<mirror::Class> klass,
                              size_t num_bytes,
                              const PreFenceVisitor& pre_fence_visitor)
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(!*gc_complete_lock_,
               !*pending_task_lock_,
               !*backtrace_lock_,
               !process_state_update_lock_,
               !Roles::uninterruptible_) {
    return AllocObjectWithAllocator<kInstrumented>(self,
                                                   klass,
                                                   num_bytes,
                                                   GetCurrentAllocator(),
                                                   pre_fence_visitor);
  }

Inside AllocObject, the work is delegated to AllocObjectWithAllocator:

template <bool kInstrumented, bool kCheckLargeObject, typename PreFenceVisitor>
inline mirror::Object* Heap::AllocObjectWithAllocator(Thread* self,
                                                      ObjPtr<mirror::Class> klass,
                                                      size_t byte_count,
                                                      AllocatorType allocator,
                                                      const PreFenceVisitor& pre_fence_visitor) {
 //... allocation instrumentation: before allocating, invokes the PreObjectAllocated callback on alloc_listener_ if one is registered
  auto pre_object_allocated = [&]() REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(!Roles::uninterruptible_) {
    if constexpr (kInstrumented) {
      AllocationListener* l = alloc_listener_.load(std::memory_order_seq_cst);
      if (UNLIKELY(l != nullptr) && UNLIKELY(l->HasPreAlloc())) {
        StackHandleScope<1> hs(self);
        HandleWrapperObjPtr<mirror::Class> h_klass(hs.NewHandleWrapper(&klass));
        l->PreObjectAllocated(self, h_klass, &byte_count);
      }
    }
  };
  ObjPtr<mirror::Object> obj; // pointer to the newly created object
  size_t bytes_allocated;
  size_t usable_size;
  size_t new_num_bytes_allocated = 0;
  {
    // Do the initial pre-alloc
    pre_object_allocated();
    ScopedAssertNoThreadSuspension ants("Called PreObjectAllocated, no suspend until alloc");

    // Need to check that we aren't the large object allocator since the large object allocation
    // code path includes this function. If we didn't check we would have an infinite loop.
    if (kCheckLargeObject && UNLIKELY(ShouldAllocLargeObject(klass, byte_count))) {
      // AllocLargeObject can suspend and will recall PreObjectAllocated if needed.
      ScopedAllowThreadSuspension ats;
      obj = AllocLargeObject<kInstrumented, PreFenceVisitor>(self, &klass, byte_count,
                                                             pre_fence_visitor);
      if (obj != nullptr) {
        return obj.Ptr();
      }
      // There should be an OOM exception, since we are retrying, clear it.
      self->ClearException();
		
      // If the large object allocation failed, fall through and retry with the normal spaces
      // If the large object allocation failed, try to use the normal spaces (main space,
      // non moving space). This can happen if there is significant virtual address space
      // fragmentation.
      pre_object_allocated();
    }
    if (IsTLABAllocator(allocator)) {  // true when allocator is kAllocatorTypeTLAB or kAllocatorTypeRegionTLAB
      byte_count = RoundUp(byte_count, space::BumpPointerSpace::kAlignment);
    }
    // If we have a thread local allocation we don't need to update bytes allocated.
    if (IsTLABAllocator(allocator) && byte_count <= self->TlabSize()) {
      obj = self->AllocTlab(byte_count);
      DCHECK(obj != nullptr) << "AllocTlab can't fail";
      obj->SetClass(klass);
      if (kUseBakerReadBarrier) {
        obj->AssertReadBarrierState();
      }
      bytes_allocated = byte_count;
      usable_size = bytes_allocated;
      no_suspend_pre_fence_visitor(obj, usable_size);
      QuasiAtomic::ThreadFenceForConstructor();
    } else if (
        !kInstrumented && allocator == kAllocatorTypeRosAlloc &&
        (obj = rosalloc_space_->AllocThreadLocal(self, byte_count, &bytes_allocated)) != nullptr &&
        LIKELY(obj != nullptr)) {
      DCHECK(!is_running_on_memory_tool_);
      obj->SetClass(klass);
      if (kUseBakerReadBarrier) {
        obj->AssertReadBarrierState();
      }
      usable_size = bytes_allocated;
      no_suspend_pre_fence_visitor(obj, usable_size);
      QuasiAtomic::ThreadFenceForConstructor();
    } else {
      // Bytes allocated that includes bulk thread-local buffer allocations in addition to direct
      // non-TLAB object allocations.
      size_t bytes_tl_bulk_allocated = 0u;
      // Call TryToAllocate to attempt the allocation; if a thread-local buffer was bulk-allocated,
      // bytes_tl_bulk_allocated records its size
      obj = TryToAllocate<kInstrumented, false>(self, allocator, byte_count, &bytes_allocated,
                                                &usable_size, &bytes_tl_bulk_allocated);
      if (UNLIKELY(obj == nullptr)) {
        // AllocateInternalWithGc can cause thread suspension, if someone instruments the
        // entrypoints or changes the allocator in a suspend point here, we need to retry the
        // allocation. It will send the pre-alloc event again.
        // Allocation failed: run a GC and then retry the allocation
        obj = AllocateInternalWithGc(self,
                                     allocator,
                                     kInstrumented,
                                     byte_count,
                                     &bytes_allocated,
                                     &usable_size,
                                     &bytes_tl_bulk_allocated,
                                     &klass);
        if (obj == nullptr) {
          // If allocation failed and no exception is pending, retry AllocObject from the top
          // The only way that we can get a null return if there is no pending exception is if the
          // allocator or instrumentation changed.
          if (!self->IsExceptionPending()) {
            // Since we are restarting, allow thread suspension.
            ScopedAllowThreadSuspension ats;
            // AllocObject will pick up the new allocator type, and instrumented as true is the safe
            // default.
            return AllocObject</*kInstrumented=*/true>(self,
                                                       klass,
                                                       byte_count,
                                                       pre_fence_visitor);
          }
          return nullptr;
        }
      }
      DCHECK_GT(bytes_allocated, 0u);
      DCHECK_GT(usable_size, 0u);
      obj->SetClass(klass);
      if (kUseBakerReadBarrier) {
        obj->AssertReadBarrierState();
      }
      if (collector::SemiSpace::kUseRememberedSet &&
          UNLIKELY(allocator == kAllocatorTypeNonMoving)) {
        // (Note this if statement will be constant folded away for the fast-path quick entry
        // points.) Because SetClass() has no write barrier, the GC may need a write barrier in the
        // case the object is non movable and points to a recently allocated movable class.
        WriteBarrier::ForFieldWrite(obj, mirror::Object::ClassOffset(), klass);
      }
      no_suspend_pre_fence_visitor(obj, usable_size);
      QuasiAtomic::ThreadFenceForConstructor();
      if (bytes_tl_bulk_allocated > 0) {
        size_t num_bytes_allocated_before =
            num_bytes_allocated_.fetch_add(bytes_tl_bulk_allocated, std::memory_order_relaxed);
        new_num_bytes_allocated = num_bytes_allocated_before + bytes_tl_bulk_allocated;
        // Only trace when we get an increase in the number of bytes allocated. This happens when
        // obtaining a new TLAB and isn't often enough to hurt performance according to golem.
        if (region_space_) {
          // With CC collector, during a GC cycle, the heap usage increases as
          // there are two copies of evacuated objects. Therefore, add evac-bytes
          // to the heap size. When the GC cycle is not running, evac-bytes
          // are 0, as required.
          TraceHeapSize(new_num_bytes_allocated + region_space_->EvacBytes());
        } else {
          TraceHeapSize(new_num_bytes_allocated);
        }
      }
    }
  }
  if (kIsDebugBuild && Runtime::Current()->IsStarted()) {
    CHECK_LE(obj->SizeOf(), usable_size);
  }
  // TODO: Deprecate.
  if (kInstrumented) {
    if (Runtime::Current()->HasStatsEnabled()) {
      RuntimeStats* thread_stats = self->GetStats();
      ++thread_stats->allocated_objects;
      thread_stats->allocated_bytes += bytes_allocated;
      RuntimeStats* global_stats = Runtime::Current()->GetStats();
      ++global_stats->allocated_objects;
      global_stats->allocated_bytes += bytes_allocated;
    }
  } else {
    DCHECK(!Runtime::Current()->HasStatsEnabled());
  }
  if (kInstrumented) {
    if (IsAllocTrackingEnabled()) {
      // allocation_records_ is not null since it never becomes null after allocation tracking is
      // enabled.
      DCHECK(allocation_records_ != nullptr);
      allocation_records_->RecordAllocation(self, &obj, bytes_allocated);
    }
    AllocationListener* l = alloc_listener_.load(std::memory_order_seq_cst);
    if (l != nullptr) {
      // Same as above. We assume that a listener that was once stored will never be deleted.
      // Otherwise we'd have to perform this under a lock.
      l->ObjectAllocated(self, &obj, bytes_allocated);
    }
  } else {
    DCHECK(!IsAllocTrackingEnabled());
  }
  if (AllocatorHasAllocationStack(allocator)) {
    PushOnAllocationStack(self, &obj);
  }
  if (kInstrumented) {
    if (gc_stress_mode_) {
      CheckGcStressMode(self, &obj);
    }
  } else {
    DCHECK(!gc_stress_mode_);
  }
  // IsGcConcurrent() isn't known at compile time so we can optimize by not checking it for
  // the BumpPointer or TLAB allocators. This is nice since it allows the entire if statement to be
  // optimized out. And for the other allocators, AllocatorMayHaveConcurrentGC is a constant since
  // the allocator_type should be constant propagated.
  if (AllocatorMayHaveConcurrentGC(allocator) && IsGcConcurrent()) {
    // New_num_bytes_allocated is zero if we didn't update num_bytes_allocated_.
    // That's fine.
    CheckConcurrentGCForJava(self, new_num_bytes_allocated, &obj);
  }
  VerifyObject(obj);
  self->VerifyStack();
  return obj.Ptr();
}

// Allocating a large object
template <bool kInstrumented, typename PreFenceVisitor>
inline mirror::Object* Heap::AllocLargeObject(Thread* self,
                                              ObjPtr<mirror::Class>* klass,
                                              size_t byte_count,
                                              const PreFenceVisitor& pre_fence_visitor) {
  // Save and restore the class in case it moves.
  StackHandleScope<1> hs(self);
  auto klass_wrapper = hs.NewHandleWrapper(klass);
  // Set the allocator type to kAllocatorTypeLOS and call AllocObjectWithAllocator again (with kCheckLargeObject = false)
  return AllocObjectWithAllocator<kInstrumented, false, PreFenceVisitor>(self, *klass, byte_count,
                                                                         kAllocatorTypeLOS,
                                                                         pre_fence_visitor);
}

The overall flow of AllocObjectWithAllocator is summarized below (a simplified sketch of the same branch order follows the list):

  • First check whether the requested size meets the large-object definition; if so, call AllocLargeObject
    • AllocLargeObject simply calls AllocObjectWithAllocator again, but with the AllocatorType parameter set to kAllocatorTypeLOS and the template parameter kCheckLargeObject set to false
    • If the large object space fails to satisfy the request, execution falls through and the normal spaces are tried
  • If the allocator is a TLAB allocator and the request fits into the current thread's TLAB, the object is bump-pointer allocated from the TLAB via Thread::AllocTlab
  • If the allocator is RosAlloc (and the call is not instrumented), RosAllocSpace's AllocThreadLocal is tried for a thread-local allocation
  • Otherwise TryToAllocate is called to perform the allocation
    • Internally, TryToAllocate dispatches to a different Space depending on the AllocatorType
    • If TryToAllocate fails, AllocateInternalWithGc runs a GC and retries the allocation (internally it retries with several escalating levels of effort)
      • If AllocateInternalWithGc still fails but no exception is pending, the AllocatorType must have changed in the meantime, so AllocObject is called again from the top
  • A null return with a pending exception means the allocation ultimately failed (an OutOfMemoryError has been thrown)
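The same branch order, stripped of every real detail, can be written as the sketch below. All of the names in it (AllocFromLargeObjectSpace, AllocFromTlab, TryToAllocateFromSpaces, GcThenRetry, the Object struct) are made-up stand-ins rather than the real Heap API; the sketch only mirrors the order of the decisions listed above.

```cpp
#include <cstddef>
#include <cstdio>

// Made-up stand-ins for the real ART types and helpers.
struct Object {};
constexpr size_t kPageSize = 4096;
constexpr size_t kLargeObjectThreshold = 3 * kPageSize;  // same threshold ShouldAllocLargeObject uses

// Each stub just reports where the allocation would have gone.
static Object* AllocFromLargeObjectSpace(size_t) { std::puts("-> large object space"); static Object o; return &o; }
static Object* AllocFromTlab(size_t)             { std::puts("-> thread-local TLAB"); static Object o; return &o; }
static Object* TryToAllocateFromSpaces(size_t)   { std::puts("-> current space (RosAlloc/Region/...)"); static Object o; return &o; }
static Object* GcThenRetry(size_t)               { std::puts("-> GC, then retry"); static Object o; return &o; }

// Mirrors only the *order of decisions* in Heap::AllocObjectWithAllocator.
Object* AllocObjectSketch(size_t bytes, bool primitive_array_or_string, size_t tlab_bytes_left) {
  // 1. Large primitive arrays / Strings try the large object space first;
  //    on failure the real code clears the OOME and falls through.
  if (primitive_array_or_string && bytes >= kLargeObjectThreshold) {
    if (Object* obj = AllocFromLargeObjectSpace(bytes)) return obj;
  }
  // 2. Requests that fit into the thread-local buffer are bump-pointer
  //    allocations that need no locking at all.
  if (bytes <= tlab_bytes_left) return AllocFromTlab(bytes);
  // 3. Otherwise ask the current allocator/space (TryToAllocate).
  if (Object* obj = TryToAllocateFromSpaces(bytes)) return obj;
  // 4. Last resort: GC and retry (AllocateInternalWithGc); a null result with
  //    a pending exception means OutOfMemoryError.
  return GcThenRetry(bytes);
}

int main() {
  AllocObjectSketch(16 * 1024, /*primitive_array_or_string=*/true, /*tlab_bytes_left=*/256);
  AllocObjectSketch(64, /*primitive_array_or_string=*/false, /*tlab_bytes_left=*/256);
  return 0;
}
```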

What's next

This article only looked at how the VM's Heap provides memory resources. Follow-up articles will cover the collector part of gc to see how the VM detects garbage and reclaims it, and then discuss application-level approaches to monitoring VM memory behaviour (GC monitoring, allocation/free tracking), such as JVMTI and native hooks.