Table of Contents
1. Overview
1.1. Why it is used in the default build
2. Data structures
3. Flow analysis
3.1 Creating a CMA area
3.1.1 Method 1: configuration via dts
3.1.2 Method 2: configuration via parameters or macros
3.2 Adding CMA to the buddy system
3.3 CMA allocation/release
3.4 Use by DMA
4. CMA pros and cons
4.1. Pros
4.2. Cons
5. Why get rid of CMA
5.1. How to get rid of CMA
5.1.1. Enable the coherent pool
5.1.2. DMA allocation trick
5.1.3. ION allocator
6. CMA documentation file
7. Afterword
8. Recommended reading
CMA (contiguous memory allocation) reserves a block of physically contiguous memory during memory initialization. Even when memory is badly fragmented, a large contiguous buffer can still be obtained from that reserved block by calling the dma_alloc_contiguous() interface with gfp flags that allow __GFP_DIRECT_RECLAIM.
1. Overview
Contiguous Memory Allocator (CMA): an allocator for large blocks of physically contiguous memory.
The Contiguous Memory Allocator is a framework that allows a machine-specific configuration for physically contiguous memory management to be set up; memory for devices is then allocated according to that configuration. The framework's main role is not to allocate memory, but to parse and manage memory configurations and to act as an intermediary between device drivers and pluggable allocators. It is therefore not tied to any particular memory allocation method or strategy.
Various devices on embedded systems have no scatter-gather and/or IO-mapping support and as such require contiguous blocks of memory to operate. They include devices such as cameras and hardware video decoders and encoders. Such devices often require big memory buffers (a full-HD frame, for instance, is more than 2 megapixels, i.e. more than 6 MB of memory), which makes mechanisms such as kmalloc() ineffective. Some embedded devices impose additional requirements on the buffers, e.g. they can operate only on buffers allocated in a particular location/memory bank (if the system has more than one) or on buffers aligned to a particular memory boundary. Embedded-device development has grown considerably of late (especially in the V4L area), and many such drivers include their own memory allocation code, most of it bootmem-based.
The CMA allocator reserves a region of physical memory:
- while no device driver is using it, the memory-management subsystem uses the region to allocate and manage movable pages;
- when a device driver needs it, the region serves contiguous allocations, and any movable pages already allocated there are migrated away first.
CMA is also integrated with the DMA subsystem, so device drivers that use the standard DMA APIs do not need to call a separate CMA API.
《Understanding Linux Zero-Copy and DMA》 https://rtoax.blog.csdn.net/article/details/108825666
《ARM SMMU and IOMMU technology ("VT-d", DMA, I/O virtualization, memory virtualization)》
1.1. Why it is used in the default build
Most i.MX SoCs have no per-IP IOMMU for the blocks that need large contiguous memory to operate, such as the VPU/GPU/ISI/CSI, or they have an IOMMU whose performance is poor. In the default i.MX BSP we therefore still allocate physically contiguous memory for those IP drivers to do DMA transfers.
In the arm64 kernel, the DMA allocation API obtains memory in various ways depending on the device configuration (in the dts or via gfp flags). The table below shows how the DMA allocation API (for a device without an IOMMU) finds pages, in order: coherent pool -> CMA -> buddy -> SWIOTLB:
| Allocator (by order) | Configurations (w/o IOMMU) | Comments | Mapping |
|---|---|---|---|
| Coherent Pool | | By __alloc_from_pool() | Already mapped on boot when the coherent pool is initialized in VMALLOC |
| CMA | | By cma_alloc() | map_vm_area, mapped in VMALLOC |
| Buddy | | By __get_free_pages(), which can only allocate from the DMA/normal zone (lowmem), 32-bit address space | Already mapped in the lowmem area by the kernel on boot |
| SWIOTLB | | By map_single() | Already mapped on boot when SWIOTLB is initialized |
(Figure: how it works, i.e. the DMA allocation path.)
By default, the kernel uses CMA as the backend for DMA buffer allocation in most cases. That is why the i.MX BSP uses CMA in the default build for GPU/VPU/CSI/ISI and other buffers used for DMA transfers.
https://rtoax.blog.csdn.net/article/details/109558498
2. Data structures
The kernel defines struct cma to manage a CMA area, along with a global cma_areas array:
struct cma {
	unsigned long	base_pfn;
	unsigned long	count;
	unsigned long	*bitmap;
	unsigned int	order_per_bit; /* Order of pages represented by one bit */
	struct mutex	lock;
#ifdef CONFIG_CMA_DEBUGFS
	struct hlist_head mem_head;
	spinlock_t mem_head_lock;
#endif
	const char *name;
};

extern struct cma cma_areas[MAX_CMA_AREAS];
extern unsigned cma_area_count;
- base_pfn: starting page-frame number of the CMA area's physical address;
- count: total number of pages in the CMA area;
- bitmap: bitmap describing the allocation state of the pages;
- order_per_bit: order of the pages represented by one bit of the bitmap; each bit covers 2^order_per_bit pages.
(Figure: how these fields describe a CMA area and its bitmap.)
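The same relationship can be read off two small helpers; the sketch below mirrors the cma_bitmap_maxno()/cma_bitmap_pages_to_bits() helpers in mm/cma.c (condensed, with comments added):

/* Total number of bits in cma->bitmap: one bit covers 2^order_per_bit pages. */
static unsigned long cma_bitmap_maxno(struct cma *cma)
{
	return cma->count >> cma->order_per_bit;
}

/* Number of bitmap bits consumed by a request for @pages pages,
 * rounded up to a multiple of 2^order_per_bit pages. */
static unsigned long cma_bitmap_pages_to_bits(const struct cma *cma,
					      unsigned long pages)
{
	return ALIGN(pages, 1UL << cma->order_per_bit) >> cma->order_per_bit;
}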
3. Flow analysis
3.1 Creating a CMA area
3.1.1 Method 1: configuration via dts
As analyzed in earlier articles, the physical memory layout is described in the dts; during boot the dtb is parsed, completing the registration of the memory information. The original article shows an example dts description of CMA memory as a figure; see the sketch below.
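As a substitute for that figure, here is a hedged example of a reserved-memory node following the upstream shared-dma-pool binding (the addresses and sizes are illustrative, not taken from the original):

/* Illustrative reserved-memory node; sizes and addresses are made up. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* "shared-dma-pool" is what rmem_cma_setup() below matches on. */
	linux,cma {
		compatible = "shared-dma-pool";
		reusable;               /* pages stay movable, shared with buddy */
		size = <0 0x10000000>;  /* 256 MiB */
		linux,cma-default;      /* use as the default CMA area */
	};
};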
While the dtb is being parsed, the rmem_cma_setup() function is invoked:
RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);
3.1.2 Method 2: configuration via parameters or macros
A CMA area can also be created through kernel parameters or configuration macros; both paths ultimately call the cma_declare_contiguous() function, as the figure in the original shows. A sample boot parameter follows.
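A hedged example of the cma= boot parameter that drives this path (sizes are illustrative; see the kernel-parameters documentation for the full cma=nn[MG]@[start[MG][-end[MG]]] syntax):

# Reserve a 256 MiB default CMA area:
cma=256M

# Same, but constrain its placement to the first 4 GiB:
cma=256M@0-4G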
3.2 Adding CMA to the buddy system
Once a CMA area has been created, that memory is reserved; dedicating it solely to drivers would clearly waste it, so the memory-management subsystem adds the CMA area to the buddy system, where it is used for allocating and managing movable pages. The CMA area is added to the buddy system through the cma_init_reserved_areas() interface.
core_initcall(cma_init_reserved_areas);
The core_initcall macro places cma_init_reserved_areas() into a dedicated section so that it is called during system startup.
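A condensed sketch of what runs at that point (based on mm/cma.c; details vary across kernel versions):

/* Condensed from mm/cma.c -- error paths trimmed, comments added. */
static int __init cma_init_reserved_areas(void)
{
	int i;

	for (i = 0; i < cma_area_count; i++) {
		/*
		 * cma_activate_area() allocates cma->bitmap, then walks the
		 * area pageblock by pageblock and calls
		 * init_cma_reserved_pageblock(), which marks each block
		 * MIGRATE_CMA and frees its pages into the buddy system,
		 * where they can serve movable allocations.
		 */
		int ret = cma_activate_area(&cma_areas[i]);

		if (ret)
			return ret;
	}

	return 0;
}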
3.3 CMA allocation/release
Allocation enters through cma_alloc(); the original traces its call chain in a figure. A condensed sketch of its core loop follows.
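This sketch is condensed from mm/cma.c (locking around alloc_contig_range() and several details are omitted, and signatures differ slightly between kernel versions): find a free run of bits, mark it busy, then try to take the underlying pages with alloc_contig_range(), which migrates whatever movable pages occupy them; on failure, retry further into the area.

/* Condensed sketch of cma_alloc(), not the verbatim kernel code. */
struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align)
{
	unsigned long mask = cma_bitmap_aligned_mask(cma, align);
	unsigned long bitmap_maxno = cma_bitmap_maxno(cma);
	unsigned long bitmap_count = cma_bitmap_pages_to_bits(cma, count);
	unsigned long start = 0, bitmap_no, pfn;
	struct page *page = NULL;

	for (;;) {
		mutex_lock(&cma->lock);
		/* Find a free, suitably aligned run of bits. */
		bitmap_no = bitmap_find_next_zero_area(cma->bitmap,
				bitmap_maxno, start, bitmap_count, mask);
		if (bitmap_no >= bitmap_maxno) {
			mutex_unlock(&cma->lock);
			break;				/* area exhausted */
		}
		bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
		mutex_unlock(&cma->lock);

		/* Migrate away whatever movable pages occupy the range. */
		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
		if (alloc_contig_range(pfn, pfn + count, MIGRATE_CMA) == 0) {
			page = pfn_to_page(pfn);
			break;
		}

		/* Migration failed: release the bits, retry past this spot. */
		cma_clear_bitmap(cma, pfn, count);
		start = bitmap_no + mask + 1;
	}

	return page;
}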
Release enters through cma_release(); the function is simple enough to quote directly:
/**
 * cma_release() - release allocated pages
 * @cma:   Contiguous memory region for which the allocation is performed.
 * @pages: Allocated pages.
 * @count: Number of allocated pages.
 *
 * This function releases memory allocated by alloc_cma().
 * It returns false when provided pages do not belong to contiguous area and
 * true otherwise.
 */
bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
{
	unsigned long pfn;

	if (!cma || !pages)
		return false;

	pr_debug("%s(page %p)\n", __func__, (void *)pages);

	pfn = page_to_pfn(pages);

	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
		return false;

	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);

	free_contig_range(pfn, count);
	cma_clear_bitmap(cma, pfn, count);
	trace_cma_release(pfn, pages, count);

	return true;
}
3.4 Use by DMA
See drivers/base/dma-contiguous.c; the main interfaces are:
/**
 * dma_alloc_from_contiguous() - allocate pages from contiguous area
 * @dev:      Pointer to device for which the allocation is performed.
 * @count:    Requested number of pages.
 * @align:    Requested alignment of pages (in PAGE_SIZE order).
 * @gfp_mask: GFP flags to use for this allocation.
 *
 * This function allocates memory buffer for specified device. It uses
 * device specific contiguous memory area if available or the default
 * global one. Requires architecture specific dev_get_cma_area() helper
 * function.
 */
struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
				       unsigned int align, gfp_t gfp_mask);

/**
 * dma_release_from_contiguous() - release allocated pages
 * @dev:   Pointer to device for which the pages were allocated.
 * @pages: Allocated pages.
 * @count: Number of allocated pages.
 *
 * This function releases memory allocated by dma_alloc_from_contiguous().
 * It returns false when provided pages do not belong to contiguous area and
 * true otherwise.
 */
bool dma_release_from_contiguous(struct device *dev, struct page *pages,
				 int count);
Internally, these interfaces are implemented on top of cma_alloc()/cma_release().
Overall, the CMA allocator is fairly simple and easy to follow, so we will not analyze it any deeper here.
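Drivers normally reach it through the generic DMA API rather than calling those interfaces directly. A hedged example (standard DMA API; the device and buffer size are illustrative):

#include <linux/dma-mapping.h>

/* Allocate a 4 MiB DMA buffer. On arm64 without an IOMMU, a large,
 * blocking-allowed request like this commonly ends up in
 * dma_alloc_from_contiguous() and hence cma_alloc(). */
static int example_alloc_dma_buffer(struct device *dev)
{
	size_t size = 4 << 20;
	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... hand dma_handle to the device, use cpu_addr from the CPU ... */

	dma_free_coherent(dev, size, cpu_addr, dma_handle);
	return 0;
}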
4. CMA pros and cons
4.1. Pros
- Carefully designed: large contiguous allocations remain possible even when memory is fragmented.
- Pages in CMA can be shared with the buddy system rather than sitting in a private reserved pool.
- Device-specific CMA areas are possible, used only by that device yet still shared with the system.
- Its start address and size are easy to configure without recompiling the kernel.
4.2. Cons
- Allocation becomes slow when pages have to be migrated.
- It is easily depleted by ordinary system allocations. When system memory runs low, customers may hit cma_alloc() failures, giving a bad user experience, e.g. when a foreground application needs a graphics buffer for rendering while the RVC (rear-view camera) needs buffers for reversing the car.
- cma_alloc() can still deadlock when it has to migrate pages that are being flushed to storage (some customers have hit a deadlock in which a FUSE filesystem held a page in the writeback path while cma_alloc() wanted to migrate it). That deadlock was the original motivation for this document.
5. Why get rid of CMA
The cons listed above (highlighted in red in the original) are the reason. The key is to reserve memory for the critical allocation paths, such as GPU graphics buffers and camera/VPU preview/recording buffers, so that allocation failures, which show up as black screens, frozen previews and the like, cannot ruin the user experience, and to avoid the deadlock seen when CMA and FUSE operate together.
5.1. How to get rid of CMA
The basic idea is to cut CMA out of the DMA allocation path and switch to the coherent (atomic) pool. Note that the coherent pool can only be used by the DMA allocation API; it is not shared with the system's buddy allocator.
5.1.1. Enable the coherent pool
Add "coherent_pool=<size>" to the kernel command line. The coherent pool is itself allocated from the system's default CMA area, so keep the CMA size greater than the coherent_pool size.
There is no reference value for this size, since it varies from system to system and use case to use case; a sample command line follows the checklist below:
- The biggest DMA consumer is the GPU, whose usage can be monitored with the gmem_info tool. Monitor gmem_info under your typical use cases and determine how much memory the GPU needs.
- Check the second DMA consumer, the ISI/camera: depends on the size and number of V4L2 reqbufs.
- Check the VPU: depends on the multimedia framework.
- Add the ALSA sound, USB, and FEC usage.
The size must be validated by testing to ensure the system is stable.
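A hedged example of the resulting kernel command line (the sizes are purely illustrative and must be tuned per the checklist above):

console=ttymxc0,115200 root=/dev/mmcblk0p2 coherent_pool=32M cma=320M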
5.1.2. DMA allocation trick
Modify arch/arm64/mm/dma-mapping.c: in the __dma_alloc() function, remove the gfpflags_allow_blocking() check so that non-coherent allocations always come from the pool:
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 7015d3e..ef30b46 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -147,7 +147,7 @@ static void *__dma_alloc(struct device *dev, size_t size,
 	size = PAGE_ALIGN(size);
 
-	if (!coherent && !gfpflags_allow_blocking(flags)) {
+	if (!coherent) { // && !gfpflags_allow_blocking(flags)) {
 		struct page *page = NULL;
 		void *addr = __alloc_from_pool(size, &page, flags);
5.1.3. ION allocator
In both the Android and Yocto builds, the ION allocator (an Android staging driver) is used for VPU buffers. It defaults to the ION CMA heap, which means ION's contiguous-memory requests go straight to CMA. To avoid CMA, we can use ION's carveout heap instead of the CMA heap:
5.1.3.1 Android
Enable the carveout heap and disable the CMA heap:
CONFIG_ION=y
CONFIG_ION_SYSTEM_HEAP=y
-CONFIG_ION_CMA_HEAP=y
+CONFIG_ION_CARVEOUT_HEAP=y
+CONFIG_ION_CMA_HEAP=n
Adjust the carveout heap's reserved base address and size in the dts:
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		carveout_region: imx_ion@0 {
			compatible = "imx-ion-pool";
			reg = <0x0 0xf8000000 0 0x8000000>;
		};
	};
};
5.1.3.2 Linux
- Kernel: see the accompanying patch for i.MX8QM. Almost the same as on Android, but the ION carveout heap driver needs a patch.
- GStreamer: apply the patch below so that buffers are allocated from the carveout heap:
yocto/build-8qm/tmp/work/aarch64-mx8-poky-linux/gstreamer1.0-plugins-base/1.14.4.imx-r0/git:
diff --git a/gst-libs/gst/allocators/gstionmemory.c b/gst-libs/gst/allocators/gstionmemory.c
index 1218c4a..12e403d 100644
--- a/gst-libs/gst/allocators/gstionmemory.c
+++ b/gst-libs/gst/allocators/gstionmemory.c
@@ -227,7 +227,8 @@ gst_ion_alloc_alloc (GstAllocator * allocator, gsize size,
   }
 
   for (gint i=0; i<heapCnt; i++) {
-    if (ihd[i].type == ION_HEAP_TYPE_DMA) {
+    if (ihd[i].type == ION_HEAP_TYPE_DMA ||
+        ihd[i].type == ION_HEAP_TYPE_CARVEOUT) {
       heap_mask |= 1 << ihd[i].heap_id;
     }
   }
6. CMA documentation file
https://lwn.net/Articles/396707/
* Contiguous Memory Allocator

The Contiguous Memory Allocator (CMA) is a framework, which allows setting up a machine-specific configuration for physically-contiguous memory management. Memory for devices is then allocated according to that configuration.

The main role of the framework is not to allocate memory, but to parse and manage memory configurations, as well as to act as an in-between between device drivers and pluggable allocators. It is thus not tied to any memory allocation method or strategy.

** Why is it needed?

Various devices on embedded systems have no scatter-getter and/or IO map support and as such require contiguous blocks of memory to operate. They include devices such as cameras, hardware video decoders and encoders, etc.

Such devices often require big memory buffers (a full HD frame is, for instance, more then 2 mega pixels large, i.e. more than 6 MB of memory), which makes mechanisms such as kmalloc() ineffective.

Some embedded devices impose additional requirements on the buffers, e.g. they can operate only on buffers allocated in particular location/memory bank (if system has more than one memory bank) or buffers aligned to a particular memory boundary.

Development of embedded devices have seen a big rise recently (especially in the V4L area) and many such drivers include their own memory allocation code. Most of them use bootmem-based methods.

CMA framework is an attempt to unify contiguous memory allocation mechanisms and provide a simple API for device drivers, while staying as customisable and modular as possible.

** Design

The main design goal for the CMA was to provide a customisable and modular framework, which could be configured to suit the needs of individual systems. Configuration specifies a list of memory regions, which then are assigned to devices. Memory regions can be shared among many device drivers or assigned exclusively to one. This has been achieved in the following ways:

1. The core of the CMA does not handle allocation of memory and management of free space. Dedicated allocators are used for that purpose.

   This way, if the provided solution does not match demands imposed on a given system, one can develop a new algorithm and easily plug it into the CMA framework.

   The presented solution includes an implementation of a best-fit algorithm.

2. CMA allows a run-time configuration of the memory regions it will use to allocate chunks of memory from. The set of memory regions is given on command line so it can be easily changed without the need for recompiling the kernel.

   Each region has it's own size, alignment demand, a start address (physical address where it should be placed) and an allocator algorithm assigned to the region.

   This means that there can be different algorithms running at the same time, if different devices on the platform have distinct memory usage characteristics and different algorithm match those the best way.

3. When requesting memory, devices have to introduce themselves. This way CMA knows who the memory is allocated for. This allows for the system architect to specify which memory regions each device should use.

3a. Devices can also specify a "kind" of memory they want. This makes it possible to configure the system in such a way, that a single device may get memory from different memory regions, depending on the "kind" of memory it requested. For example, a video codec driver might want to allocate some shared buffers from the first memory bank and the other from the second to get the highest possible memory throughput.

** Use cases

Let's analyse some imaginary system that uses the CMA to see how the framework can be used and configured.

We have a platform with a hardware video decoder and a camera each needing 20 MiB of memory in worst case. Our system is written in such a way though that the two devices are never used at the same time and memory for them may be shared. In such a system the following two command line arguments would be used:

    cma=r=20M cma_map=video,camera=r

The first instructs CMA to allocate a region of 20 MiB and use the first available memory allocator on it. The second, that drivers named "video" and "camera" are to be granted memory from the previously defined region.

We can see, that because the devices share the same region of memory, we save 20 MiB of memory, compared to the situation when each of the devices would reserve 20 MiB of memory for itself.

However, after some development of the system, it can now run video decoder and camera at the same time. The 20 MiB region is no longer enough for the two to share. A quick fix can be made to grant each of those devices separate regions:

    cma=v=20M,c=20M cma_map=video=v;camera=c

This solution also shows how with CMA you can assign private pools of memory to each device if that is required.

Allocation mechanisms can be replaced dynamically in a similar manner as well. Let's say that during testing, it has been discovered that, for a given shared region of 40 MiB, fragmentation has become a problem. It has been observed that, after some time, it becomes impossible to allocate buffers of the required sizes. So to satisfy our requirements, we would have to reserve a larger shared region beforehand.

But fortunately, you have also managed to develop a new allocation algorithm -- Neat Allocation Algorithm or "na" for short -- which satisfies the needs for both devices even on a 30 MiB region. The configuration can be then quickly changed to:

    cma=r=30M:na cma_map=video,camera=r

This shows how you can develop your own allocation algorithms if the ones provided with CMA do not suit your needs and easily replace them, without the need to modify CMA core or even recompiling the kernel.

** Technical Details

*** The command line parameters

As shown above, CMA is configured from command line via two arguments: "cma" and "cma_map". The first one specifies regions that are to be reserved for CMA. The second one specifies what regions each device is assigned to.

The format of the "cma" parameter is as follows:

    cma          ::= "cma=" regions [ ';' ]
    regions      ::= region [ ';' regions ]
    region       ::= reg-name
                       '=' size
                       [ '@' start ]
                       [ '/' alignment ]
                       [ ':' [ alloc-name ] [ '(' alloc-params ')' ] ]
    reg-name     ::= a sequence of letters and digits
                       // name of the region
    size         ::= memsize  // size of the region
    start        ::= memsize  // desired start address of the region
    alignment    ::= memsize  // alignment of the start address of the region
    alloc-name   ::= a non-empty sequence of letters and digits
                       // name of an allocator that will be used
                       // with the region
    alloc-params ::= a sequence of chars other then ')' and ';'
                       // optional parameters for the allocator
    memsize      ::= whatever memparse() accepts

The format of the "cma_map" parameter is as follows:

    cma-map      ::= "cma_map=" rules [ ';' ]
    rules        ::= rule [ ';' rules ]
    rule         ::= patterns '=' regions
    patterns     ::= pattern [ ',' patterns ]
    regions      ::= reg-name [ ',' regions ]
                       // list of regions to try to allocate memory
                       // from for devices that match pattern
    pattern      ::= dev-pattern [ '/' kind-pattern ]
                   | '/' kind-pattern
                       // pattern request must match for this rule to
                       // apply to it; the first rule that matches is
                       // applied; if dev-pattern part is omitted
                       // value identical to the one used in previous
                       // pattern is assumed
    dev-pattern  ::= pattern-str
                       // pattern that device name must match for the
                       // rule to apply.
    kind-pattern ::= pattern-str
                       // pattern that "kind" of memory (provided by
                       // device) must match for the rule to apply.
    pattern-str  ::= a non-empty sequence of characters with '?'
                     meaning any character and possible '*' at the
                     end meaning to match the rest of the string

Some examples (whitespace added for better readability):

    cma = r1 = 64M       // 64M region
             @512M       // starting at address 512M
                         // (or at least as near as possible)
             /1M         // make sure it's aligned to 1M
             :foo(bar);  // uses allocator "foo" with "bar"
                         // as parameters for it
          r2 = 64M       // 64M region
             /1M;        // make sure it's aligned to 1M
                         // uses the first available allocator
          r3 = 64M       // 64M region
             @512M       // starting at address 512M
             :foo;       // uses allocator "foo" with no parameters

    cma_map = foo = r1;
                    // device foo with kind==NULL uses region r1

              foo/quaz = r2;  // OR: /quaz = r2;
                    // device foo with kind == "quaz" uses region r2

              foo/* = r3;     // OR: /* = r3;
                    // device foo with any other kind uses region r3

              bar/* = r1,r2;
                    // device bar with any kind uses region r1 or r2

              baz?/a* , baz?/b* = r3;
                    // devices named baz? where ? is any character
                    // with kind being a string starting with "a" or
                    // "b" use r3

*** The device and kind of memory

The name of the device is taken form the device structure. It is not possible to use CMA if driver does not register a device (actually this can be overcome if a fake device structure is provided with at least the name set).

The kind of memory is an optional argument provided by the device whenever it requests memory chunk. In many cases this can be ignored but sometimes it may be required for some devices.

For instance, let say that there are two memory banks and for performance reasons a device uses buffers in both of them. In such case, the device driver would define two kinds and use it for different buffers. Command line arguments could look as follows:

    cma=a=32M@0,b=32M@512M cma_map=foo/a=a;foo/b=b

And whenever the driver allocated the memory it would specify the kind of memory:

    buffer1 = cma_alloc(dev, 1 << 20, 0, "a");
    buffer2 = cma_alloc(dev, 1 << 20, 0, "b");

If it was needed to try to allocate from the other bank as well if the dedicated one is full command line arguments could be changed to:

    cma=a=32M@0,b=32M@512M cma_map=foo/a=a,b;foo/b=b,a

On the other hand, if the same driver was used on a system with only one bank, the command line could be changed to:

    cma=r=64M cma_map=foo/*=r

without the need to change the driver at all.

*** API

There are four calls provided by the CMA framework to devices. To allocate a chunk of memory cma_alloc() function needs to be used:

    unsigned long cma_alloc(const struct device *dev,
                            const char *kind,
                            unsigned long size,
                            unsigned long alignment);

If required, device may specify alignment that the chunk need to satisfy. It have to be a power of two or zero. The chunks are always aligned at least to a page.

The kind specifies the kind of memory as described to in the previous subsection. If device driver does not use notion of memory kinds it's safe to pass NULL as the kind.

The basic usage of the function is just a:

    addr = cma_alloc(dev, NULL, size, 0);

The function returns physical address of allocated chunk or a value that evaluated true if checked with IS_ERR_VALUE(), so the correct way for checking for errors is:

    unsigned long addr = cma_alloc(dev, size);
    if (IS_ERR_VALUE(addr))
            return (int)addr;
    /* Allocated */

(Make sure to include <linux/err.h> which contains the definition of the IS_ERR_VALUE() macro.)

Allocated chunk is freed via a cma_put() function:

    int cma_put(unsigned long addr);

It takes physical address of the chunk as an argument and decreases it's reference counter. If the counter reaches zero the chunk is freed. Most of the time users do not need to think about reference counter and simply use the cma_put() as a free call.

If one, however, were to share a chunk with others built in reference counter may turn out to be handy. To increment it, one needs to use cma_get() function:

    int cma_put(unsigned long addr);

The last function is the cma_info() which returns information about regions assigned to given (dev, kind) pair. Its syntax is:

    int cma_info(struct cma_info *info,
                 const struct device *dev,
                 const char *kind);

On successful exit it fills the info structure with lower and upper bound of regions, total size and number of regions assigned to given (dev, kind) pair.

*** Allocator operations

Creating an allocator for CMA needs four functions to be implemented.

The first two are used to initialise an allocator far given driver and clean up afterwards:

    int  cma_foo_init(struct cma_region *reg);
    void cma_foo_done(struct cma_region *reg);

The first is called during platform initialisation. The cma_region structure has saved starting address of the region as well as its size. It has also alloc_params field with optional parameters passed via command line (allocator is free to interpret those in any way it pleases). Any data that allocate associated with the region can be saved in private_data field.

The second call cleans up and frees all resources the allocator has allocated for the region. The function can assume that all chunks allocated form this region have been freed thus the whole region is free.

The two other calls are used for allocating and freeing chunks. They are:

    struct cma_chunk *cma_foo_alloc(struct cma_region *reg,
                                    unsigned long size,
                                    unsigned long alignment);
    void cma_foo_free(struct cma_chunk *chunk);

As names imply the first allocates a chunk and the other frees a chunk of memory. It also manages a cma_chunk object representing the chunk in physical memory.

Either of those function can assume that they are the only thread accessing the region. Therefore, allocator does not need to worry about concurrency.

When allocator is ready, all that is left is register it by adding a line to "mm/cma-allocators.h" file:

    CMA_ALLOCATOR("foo", foo)

The first "foo" is a named that will be available to use with command line argument. The second is the part used in function names.

*** Integration with platform

There is one function that needs to be called form platform initialisation code. That is the cma_regions_allocate() function:

    void cma_regions_allocate(int (*alloc)(struct cma_region *reg));

It traverses list of all of the regions given on command line and reserves memory for them. The only argument is a callback function used to reserve the region. Passing NULL as the argument makes the function use cma_region_alloc() function which uses bootmem for allocating.

Alternatively, platform code could traverse the cma_regions array by itself but this should not be necessary.

If cma_region_alloc() allocator is used, the cma_regions_allocate() function needs to be allocated when bootmem is active.

Platform has also a way of providing default cma and cma_map parameters. cma_defaults() function is used for that purpose:

    int cma_defaults(const char *cma, const char *cma_map)

It needs to be called after early params have been parsed but prior to allocating regions. Arguments of this function are used only if they are not-NULL and respective command line argument was not provided.

** Future work

In the future, implementation of mechanisms that would allow the free space inside the regions to be used as page cache, filesystem buffers or swap devices is planned. With such mechanisms, the memory would not be wasted when not used.

Because all allocations and freeing of chunks pass the CMA framework it can follow what parts of the reserved memory are freed and what parts are allocated. Tracking the unused memory would let CMA use it for other purposes such as page cache, I/O buffers, swap, etc.
7. Afterword
This concludes the analysis of memory management for now; some modules may be revisited and refined in later posts.
The memory-management subsystem is extremely complex and deeply intertwined, and it is easy to get lost in it; even after considerable effort, one can only claim to have scratched the surface.
Learning is like climbing mountains: facing a high mountain can be daunting, but once you have crossed it, an equally high mountain no longer scares you.
Next up is the process-management subsystem, to connect the remaining dots.
Future posts will keep analyzing the kernel's various frameworks, concurrency mechanisms and more; stay tuned, and feel free to discuss.
https://www.cnblogs.com/LoyenWang/p/12182594.html
8. Recommended reading
《Understanding Linux Zero-Copy and DMA》 https://rtoax.blog.csdn.net/article/details/108825666
《ARM SMMU and IOMMU technology ("VT-d", DMA, I/O virtualization, memory virtualization)》
《Direct Memory Access (DMA)》
《[Linux Device Drivers, 2nd Edition] mmap and DMA》
《arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs》
《Aarch64 Linux Kernel Memory Management》
《CMA documentation》
《How to get rid of CMA》
《Aarch64 Kernel Memory Management》
《Linux memory management: what is CMA (contiguous memory allocation)? It can be used together with DMA》