
Android IPC Communication Mechanism: Source Code Analysis

An introduction to Binder communication:
   
Linux offers several IPC mechanisms: sockets, named pipes, message queues, signals, and shared memory. Java likewise supports inter-process communication via sockets, named pipes, and so on, so Android applications could naturally use these Java IPC mechanisms. Yet if you read the Android source, communication between applications on the same device almost never uses them; it uses Binder instead. Why did Google choose this approach? Because Binder communication is highly efficient.
Binder communication is implemented through the Linux binder driver and behaves much like thread migration: an IPC between two processes looks as if one process entered the other, executed some code there, and returned with the result. Binder's user space maintains a pool of available threads for each process; the pool handles incoming IPC requests and executes the process's local messages. Binder communication is synchronous, not asynchronous.
   
Binder communication in Android is built around Services and Clients. Every process that wants to communicate over IBinder must create an IBinder interface. A single process in the system manages all system services, and Android does not allow users to add unauthorized system services; of course, now that the source is open, we can modify some code to add low-level system services ourselves. User programs likewise create servers, or Services, for inter-process communication. ActivityManagerService manages the creation, connection (connect), and disconnection (disconnect) of every service in the Java application layer, and all Activities are also started and loaded through this service. ActivityManagerService itself is loaded as a system service.
Before the Android VM starts, the system launches the Service Manager process. The Service Manager opens the binder driver and tells the binder kernel driver that this process will act as the System Service Manager; the process then enters a loop, waiting to handle data from other processes. When a user creates a system service, it obtains a remote IServiceManager interface via defaultServiceManager, through which it can call addService to register the system service with the Service Manager process. A client can then call getService to obtain the IBinder object of the Service it wants to connect to. That IBinder is a reference in the binder kernel to the Service's BBinder, so no two identical IBinder objects for a service exist in the binder kernel. Each client process also needs to open the binder driver. Once a user program holds this object, it can invoke the service object's methods through the binder kernel. Client and Service live in different processes, yet this achieves a communication style resembling thread migration: once the client holds the IBinder interface returned for the Service, calling the Service's methods feels just like calling its own functions.
[Figure: a client establishing a connection with a Service]
Let us start with the ServiceManager registration process and work through how the steps above are implemented.


Source analysis of the ServiceManager registration process:
Service Manager process (Service_manager.c):
Service_manager manages the Services of other processes. This server program must be running before the Android runtime comes up; otherwise the Android Java VM's ActivityManagerService cannot register itself.
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024); // open the /dev/binder driver

    if (binder_become_context_manager(bs)) { // register as service manager in the binder kernel
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
It first opens the binder driver, then binder_become_context_manager calls ioctl to tell the binder kernel driver that this is the service-management process, and finally binder_loop waits for data from other processes. BINDER_SERVICE_MANAGER is the handle of the service-management process, defined as:

/* the one magic object */
#define BINDER_SERVICE_MANAGER ((void*) 0)

If the handle a client uses when fetching a service does not match this one, the Service Manager refuses the client's request. How the client sets this handle is covered below.
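That check is visible at the top of svcmgr_handler, roughly (abbreviated from Service_manager.c):

    int svcmgr_handler(struct binder_state *bs,
                       struct binder_txn *txn,
                       struct binder_io *msg,
                       struct binder_io *reply)
    {
        ....
        if (txn->target != svcmgr_handle) // not addressed to the magic handle 0: reject
            return -1;
        ....
    }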


Registering the CameraService service (Main_mediaservice.c):

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();              // Audio service
    MediaPlayerService::instantiate();        // MediaPlayer service
    CameraService::instantiate();             // Camera service
    ProcessState::self()->startThreadPool();  // start this process's binder thread pool
    IPCThreadState::self()->joinThreadPool(); // add the current thread to the pool
}
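Note the pairing at the end of main: startThreadPool spawns an extra binder thread that also ends up in joinThreadPool, so both it and the main thread serve incoming transactions. Roughly, from ProcessState.cpp:

    void ProcessState::startThreadPool()
    {
        AutoMutex _l(mLock);
        if (!mThreadPoolStarted) {
            mThreadPoolStarted = true;
            spawnPooledThread(true); // creates a PoolThread whose threadLoop() calls
        }                            // IPCThreadState::self()->joinThreadPool()
    }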


CameraService.cpp

void CameraService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.camera"), new CameraService());
}
This creates the CameraService service object and adds it to the ServiceManager process.



How a client obtains the remote IServiceManager IBinder interface:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
The first time any process calls defaultServiceManager, gDefaultServiceManager is NULL, so the process obtains a ProcessState instance via ProcessState::self. Constructing ProcessState opens the binder driver.
ProcessState.cpp

sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;

    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}


ProcessState::ProcessState()
    : mDriverFD(open_driver()) // open the /dev/binder driver
    ...........................
{
}
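open_driver is where that descriptor comes from; simplified from ProcessState.cpp (exact details vary by version):

    static int open_driver()
    {
        int fd = open("/dev/binder", O_RDWR); // one fd per process, held for the process lifetime
        if (fd >= 0) {
            ....
            int maxThreads = 15;
            ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); // cap the binder thread pool
        }
        return fd;
    }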


sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
Android supports the binder driver, so the code takes the getStrongProxyForHandle path. The handle here is 0, which matches BINDER_SERVICE_MANAGER in Service_manager.
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder; // on the first call, b is NULL
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
On the first call b is NULL, so a BpBinder object is created for it:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}


void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
So getContextObject returns a BpBinder object, which is then passed to:

interface_cast<IServiceManager>(
    ProcessState::self()->getContextObject(NULL));

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
Expanding the macro that generates asInterface ultimately yields:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
This returns a BpServiceManager object, where obj is the BpBinder object we created above.
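For context, this asInterface body is not written by hand; it is generated by a macro pair from IInterface.h. A sketch of how an interface typically uses them, with a hypothetical IDemo interface:

    class IDemo : public IInterface {
    public:
        DECLARE_META_INTERFACE(Demo); // declares descriptor, asInterface(), etc.
        virtual void ping() = 0;      // hypothetical method
    };

    // in the implementation file:
    IMPLEMENT_META_INTERFACE(Demo, "com.example.IDemo"); // expands to the asInterface code shown above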


How a client obtains a Service's remote IBinder interface
Taking CameraService as an example (camera.cpp):
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
From the analysis above we know that sm is a BpServiceManager object:
    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++) {
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            LOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }
    virtual sp<IBinder> checkService(const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }
Here remote() is the BpBinder object we obtained earlier, so checkService calls the transact function of BpBinder:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
mHandle is 0. BpBinder calls down into IPCThreadState::transact, which sends the data to the Service Manager process associated with mHandle.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ............................................................
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ..............................
    }
    return err;
}


The data to be sent is assembled by writeTransactionData:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle; // this handle is passed through to service_manager
    tr.code = code;
    tr.flags = binderFlags;
    ..............
}
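For reference, the fields of binder_transaction_data being filled in here look roughly like this in the binder header of that era (abbreviated; types vary slightly across versions):

    struct binder_transaction_data {
        union {
            size_t handle; // addressed by handle on the way in (BC_TRANSACTION)
            void   *ptr;   // addressed by pointer on the way out (BR_TRANSACTION)
        } target;
        void         *cookie;
        unsigned int  code;  // e.g. CHECK_SERVICE_TRANSACTION
        unsigned int  flags; // e.g. TF_ONE_WAY
        pid_t         sender_pid;
        uid_t         sender_euid;
        size_t        data_size;
        size_t        offsets_size;
        ....
    };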
waitForResponse calls talkWithDriver to perform the reads and writes against the binder kernel. When the binder kernel receives the data, a thread in service_manager's thread pool starts up; service_manager looks up the CameraService service and calls binder_send_reply, which writes the returned data back into the binder kernel.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;
        ...................................................
    }
    ...................................................
}

talkWithDriver issues the actual system call:

#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
...................................................
The ioctl system call above, with the BINDER_WRITE_READ command, performs the reads and writes against the binder kernel.
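Concretely, talkWithDriver points a binder_write_read block at the thread's outgoing and incoming buffers before issuing that ioctl; a simplified sketch of the setup (error handling omitted):

    binder_write_read bwr;
    bwr.write_size   = mOut.dataSize();
    bwr.write_buffer = (long unsigned int)mOut.data(); // BC_* commands to send
    bwr.read_size    = mIn.dataCapacity();
    bwr.read_buffer  = (long unsigned int)mIn.data();  // space for returned BR_* commands
    ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr); // one call both writes and reads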




Client A communicating with the binder kernel:
(kernel/drivers/android/Binder.c)
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    if (binder_debug_mask & BINDER_DEBUG_OPEN_CLOSE)
        printk(KERN_INFO "binder_open: %d:%d\n", current->group_leader->pid, current->pid);

    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current; // save the task struct of the process that opened /dev/binder
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);
    mutex_lock(&binder_lock);
    binder_stats.obj_created[BINDER_STAT_PROC]++;
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;
    mutex_unlock(&binder_lock);

    if (binder_proc_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        create_proc_read_entry(strbuf, S_IRUGO, binder_proc_dir_entry_proc,
                               binder_read_proc_proc, proc); // create a proc entry for this process
    }
    return 0;
}
So the information of every process that opens /dev/binder is kept in the binder kernel, and when a process later calls ioctl to talk to the kernel binder, the binder kernel can look up the calling process's information. BINDER_WRITE_READ is the most important command passed to ioctl when talking to the binder kernel; as we saw, the command that talkWithDriver sends from IPCThreadState::transact is exactly BINDER_WRITE_READ.
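The argument carried by BINDER_WRITE_READ is a binder_write_read block that describes both directions of the exchange at once (abbreviated from the binder header):

    struct binder_write_read {
        signed long   write_size;     // bytes of BC_* commands available in write_buffer
        signed long   write_consumed; // bytes the driver actually consumed
        unsigned long write_buffer;
        signed long   read_size;      // capacity of read_buffer for BR_* returns
        signed long   read_consumed;  // bytes the driver filled in
        unsigned long read_buffer;
    };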
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/
    // the process calling ioctl can be suspended here; the caller stays suspended until the service returns
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    ...........................................................
    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        ...........................................................
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
                                      bwr.write_size, &bwr.write_consumed);
            ...........................................................
        }
        if (bwr.read_size > 0) { // data is written back to the caller process
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer,
                                     bwr.read_size, &bwr.read_consumed,
                                     filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait); // resume the suspended caller process
            ...........................................................
        }
        ...........................................................
    }
    ...........................................................
    }
}

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr)) // copy cmd from user space into the kernel
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        case BC_INCREFS:
        .........................................
        case BC_TRANSACTION: // the cmd set by IPCThreadState in writeTransactionData
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ........................................

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    ..............................................
    if (reply) { // cmd != BC_REPLY, so this branch is not taken
        ......................................
    } else {
        if (tr->target.handle) { // not true for service_manager (handle == 0)
            .......................................
        } else {
            // here we pick up the service_manager process info registered in the binder kernel:
            target_node = binder_context_mgr_node; // BINDER_SET_CONTEXT_MGR registered the service manager
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        e->to_node = target_node->debug_id;
        target_proc = target_node->proc; // the structure of the target process, service_manager
        if (target_proc == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        ....................
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait; // the wait queue service manager is suspended on
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    ............................................
    case BINDER_TYPE_BINDER:
    case BINDER_TYPE_WEAK_BINDER: {
        ..........................
        ref = binder_get_ref_for_node(target_proc, node); // create, in the binder kernel, a reference
        ..........................                        // to the service that was found
    }
    break;
    ............................................
    if (target_wait)
        wake_up_interruptible(target_wait); // wake the suspended thread to handle the caller's request
    ............................................ // for the command handling see svcmgr_handler
}
  
At this point getService has reached the service manager process; if the service manager was suspended, it is now woken up. Let's look at the binder_loop function in service manager.
Service_manager.c

void binder_loop(struct binder_state *bs, binder_handler func)
{
    .................................
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // the process suspends here when there are no requests to handle
        ........
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        ........
    }
}

Inside binder_parse, each BR_TRANSACTION is dispatched to func (svcmgr_handler) and answered:

        case BR_TRANSACTION: {
            ........
            res = func(bs, txn, &msg, &reply);
            binder_send_reply(bs, &reply, txn->data, res); // return the service that was found to the caller
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
        ........
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY; // substitute BC_REPLY for the cmd we saw in binder_thread_write, and
    data.txn.target = 0;       // you can see how service manager returns the found service to the caller
    ..........................
    binder_write(bs, &data, sizeof(data)); // call ioctl to talk to the binder kernel
}
Once this returns, the caller is woken up, and the client process holds a reference, inside the binder kernel, to the requested service's IBinder object; this is a remote BBinder object.
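For completeness, the lookup that service_manager performs before replying lives in svcmgr_handler; roughly (abbreviated from Service_manager.c):

    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);   // service name, e.g. "media.camera"
        ptr = do_find_service(bs, s, len); // the handle recorded earlier by addService
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);           // write the service reference into the reply
        return 0;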
Client-to-Service communication once the connection is established:
    virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeStrongBinder(cameraClient->asBinder());
        remote()->transact(BnCameraService::CONNECT, data, &reply);
        return interface_cast<ICamera>(reply.readStrongBinder());
    }
As analyzed earlier, remote() here is the proxy object we obtained for CameraService, and the caller process crosses over into CameraService. Every Android process creates a thread pool that handles requests from other processes. When there is no data, its threads are suspended; here the binder kernel wakes one up:
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n",
        (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        int32_t cmd;
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail(); // the binder kernel has delivered data to the service
            if (IN < 4) continue;
            cmd = mIn.readInt32();
            ........
            result = executeCommand(cmd);
        }
        ........
    } while (........);
    ........
}

executeCommand unpacks the transaction and dispatches it to the target BBinder:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    ........
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            ........
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;

            //LOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            .........................
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie); // the Binder object inside the service, here CameraService
                const status_t error = b->transact(tr.code, buffer, &reply, 0); // dispatch into the service
                if (error < NO_ERROR) reply.setError(error);
            }
            ........
        }
        ........
}

BBinder::transact then routes the request:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    ........
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    ...................
    return err;
}
This invokes CameraService's onTransact function; CameraService derives from BBinder.
status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient =
                interface_cast<ICameraClient>(data.readStrongBinder());
            sp<ICamera> camera = connect(cameraClient); // the real handler
            reply->writeStrongBinder(camera->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
This completes one full round of communication from client to service.
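Condensing the whole sequence into the client's point of view, the calls analyzed above line up as follows (a sketch; error handling omitted):

    sp<IServiceManager> sm = defaultServiceManager();          // BpServiceManager over handle 0
    sp<IBinder> b = sm->getService(String16("media.camera"));  // CHECK_SERVICE_TRANSACTION round trip
    sp<ICameraService> cs = interface_cast<ICameraService>(b); // BpCameraService proxy
    sp<ICamera> cam = cs->connect(cameraClient);               // CONNECT round trip into CameraService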

Designing a multi-client Service
A Service can connect to different clients. By a multi-client Service I mean one that creates a distinct IClient interface for each client inside the Service. If you have done AIDL programming, you know that a Service exposes an IService interface to clients; through defaultServiceManager->getService a client obtains a BpBinder interface for the corresponding service, and calling transact through that interface is enough to talk to the service. That already makes a simple, complete service/client program, but it has a drawback: the single IService is open to every client. If we need to tell clients apart, each client could hand the Service some identifying token when the connection is established. That works, but it gets clumsy; a camera, for instance, may have more than one sensor, each with different capabilities, and handling that this way becomes messy. Instead we can borrow the multi-client design used in QT: the Service creates an IClient interface for every client, and the IService interface is used only for establishing the connection between Service and client. For a camera with multiple sensors, the Service can then open a different device for each client.
import android.os.IBinder;
import android.os.RemoteException;

public class TestServerServer extends android.app.testServer.ITestServer.Stub
{
    int mClientCount = 0;
    testServerClient mClient[];

    @Override
    public android.app.testServer.ITestClient.Stub connect(ITestClient client) throws RemoteException
    {
        testServerClient tClient = new testServerClient(this, client); // create a distinct IClient
        mClient[mClientCount] = tClient;                               // for each client
        mClientCount++;
        System.out.printf("*** Server connect client is %d", client.asBinder());
        return tClient;
    }

    @Override
    public void receivedData(int count) throws RemoteException
    {
    }

    public static class testServerClient extends android.app.testServer.ITestClient.Stub
    {
        public android.app.testServer.ITestClient mClient;
        public TestServerServer mServer;

        public testServerClient(TestServerServer tServer, android.app.testServer.ITestClient tClient)
        {
            mServer = tServer;
            mClient = tClient;
        }

        public IBinder asBinder()
        {
            return this;
        }
    }
}
This is only a demo of such a Service; to add it as a real system service you would also have to modify the Android source to avoid the permission check!
Summary:
Suppose a client process A and a service process B want to establish IPC. From the analysis above, the flow is:
1: Service B opens the binder driver, registering its process information with the kernel, where a reference is created for the Service.
2: Service B adds its service information to the service_manager process via addService.
3: Service B's thread pool suspends, waiting for client requests.
4: Client A calls open_driver to open the binder driver, registering its own process information with the kernel, where a binder_ref is created for the Service.
5: Client A calls defaultServiceManager().getService to obtain Service B's IBinder object as known to the kernel.
6: Through transact, Client A talks to the binder kernel, and the binder kernel suspends Client A.
7: The binder kernel resumes a thread in Service B's thread pool, which handles the client's request in joinThreadPool.
8: The binder kernel suspends Service B and writes Service B's returned data to Client A.
9: The binder kernel resumes Client A.
The binder kernel driver acts as an intermediary between Client A and Service B. For any IBinder object passed through transact, the binder kernel creates a unique associated binder object, which is used to distinguish different clients.



