
Android Binder Mechanism, Part 2 (Client and an Ordinary Server)


Before discussing how they communicate with each other, let's take MediaServer as an example and see what an ordinary Server process actually does.

int main()
{
    ……
    // Obtain the ProcessState instance
    sp<ProcessState> proc(ProcessState::self());
    // Get the Binder client (proxy) of ServiceManager
    sp<IServiceManager> sm = defaultServiceManager();
    ……
    // Register the MediaPlayer service with the system through the ServiceManager proxy
    MediaPlayerService::instantiate();
    ……
    // start run
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

defaultServiceManager() was covered in the previous post.

The implementation of MediaPlayerService::instantiate() is shown below: it simply calls addService on ServiceManager. The flow mirrors the getService call analyzed in the previous post, so it is not traced in detail here; a rough proxy-side sketch of addService follows the code.

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
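For reference, the proxy side of addService looks roughly like the sketch below, paraphrased from BpServiceManager in IServiceManager.cpp (the exact signature differs across Android versions; newer ones add an allowIsolated flag). It packs the service name and the service's Binder object into a Parcel and sends an ADD_SERVICE_TRANSACTION to ServiceManager:

virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    // Pack the interface token, the service name, and the service's Binder
    // object into a Parcel, then hand it to ServiceManager (handle 0).
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}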
Next, look at the implementation of ProcessState::self()->startThreadPool().

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
In fact, this function does nothing more than create a new thread; inside that thread an IPCThreadState is obtained via IPCThreadState::self() (created on first use) and joinThreadPool() is called.

void IPCThreadState::joinThreadPool(bool isMain)
{
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();

        result = executeCommand(cmd);

        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}

As we can see, the main thread and the newly created thread both do the same thing: talkWithDriver() reads from the Binder driver, and executeCommand() then handles the request. That is all an ordinary Server process keeps doing after startup: wait for a client request, process it, and return the result to the client. Stripped of the MediaPlayer specifics, the server side boils down to the pattern sketched below.
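Purely as an illustration (this is not code from MediaServer), a bare-bones native service built on the same pattern could look like the following; HelloService, the "hello.world" name, and transaction code 1 are hypothetical names invented for this sketch:

#include <binder/Binder.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/Parcel.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

// Hypothetical service: inherits BBinder directly instead of a generated Bn class.
class HelloService : public BBinder {
protected:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags) {
        switch (code) {
        case 1: // hypothetical transaction code for this sketch
            reply->writeInt32(42);
            return NO_ERROR;
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

int main() {
    // Same three steps as MediaServer's main(): register, start the pool, loop.
    defaultServiceManager()->addService(String16("hello.world"), new HelloService());
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    return 0;
}

A client would then call defaultServiceManager()->getService(String16("hello.world")) and invoke transact() on the returned IBinder, exactly as BpMediaPlayerService does through remote().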

Now that the Server process is up and ready, it is the Client's turn to enter the stage: the Client will ask the Server to do something through Binder. Let's look at the code:

status_t MediaPlayer::setDataSource(int fd, int64_t offset, int64_t length)
{
    status_t err = UNKNOWN_ERROR;
    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != 0) {
        sp<IMediaPlayer> player(service->create(this, mAudioSessionId));
        if ((NO_ERROR != doSetRetransmitEndpoint(player)) ||
            (NO_ERROR != player->setDataSource(fd, offset, length))) {
            player.clear();
        }
        err = attachNewPlayer(player);
    }
    return err;
}

getMediaPlayerService() was analyzed before; it returns a BpMediaPlayerService. A question worth asking here: why can this BpMediaPlayerService carry out Binder communication with the MediaPlayerService process, rather than with some other Server process?

Let's revisit the code:

/*static*/ const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            ALOGW("Media player service not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);

        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
    return sMediaPlayerService;
}
The answer lies in the line binder = sm->getService(String16("media.player")); the returned binder will be used as the constructor argument of BpMediaPlayerService. Let's look at getService:

virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++){
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        ALOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}

virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = static_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

Inside unflatten_binder, flat->type and flat->handle were assigned on the ServiceManager side: flat->type is BINDER_TYPE_HANDLE, and flat->handle is the handle of the Service being looked up. The intermediate steps go through the Binder kernel driver and are not covered here; the layout of flat_binder_object is sketched below for reference.
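A rough sketch of flat_binder_object, paraphrased from the binder kernel header of that era (field types differ between kernel versions, e.g. newer headers use fixed-width and binder_uintptr_t types):

struct flat_binder_object {
    /* Either BINDER_TYPE_BINDER (local object) or BINDER_TYPE_HANDLE (remote). */
    unsigned long   type;
    unsigned long   flags;

    /* 8 bytes of payload. */
    union {
        void        *binder;    /* local object: the BBinder's weak-ref pointer */
        signed long handle;     /* remote object: driver-assigned handle */
    };

    /* extra data associated with the local object (the BBinder pointer itself) */
    void            *cookie;
};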

So after binder = sm->getService(String16("media.player")) has executed, binder is a BpBinder(handle), where handle is the handle of the Service that was looked up. With that, the communication channel between the client and the Server side is established. The BpBinder itself is created by ProcessState, roughly as sketched below.
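A simplified sketch of ProcessState::getStrongProxyForHandle(), which caches one proxy per handle and creates a new BpBinder when none exists yet (paraphrased; the real function carries some extra reference bookkeeping and differs slightly across versions):

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    // Look up (or create) the per-process entry for this handle.
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            // No proxy cached for this handle yet: create one.
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // Reuse the cached proxy.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}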

Having analyzed getMediaPlayerService() and established the communication path, we can now move on to the communication itself.

sp<IMediaPlayer> player(service->create(this, mAudioSessionId));

Open IMediaPlayerService.cpp and look at the proxy-side implementation of create.

virtual sp<IMediaPlayer> create(
        const sp<IMediaPlayerClient>& client, int audioSessionId)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeStrongBinder(client->asBinder());
    data.writeInt32(audioSessionId);

    remote()->transact(CREATE, data, &reply);
    return interface_cast<IMediaPlayer>(reply.readStrongBinder());
}
From the earlier analysis we can easily tell that remote() returns the BpBinder(handle), and transact(CREATE, data, &reply) writes the data into the Binder driver and wakes up the Server process (see the BpBinder::transact sketch below). Now let's see how the Server reacts. We already know that the Server process keeps reading from the Binder driver and then calling executeCommand, so let's look at the implementation of executeCommand directly.
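For completeness, BpBinder::transact essentially forwards the call to IPCThreadState together with the stored handle; a simplified sketch, paraphrased from BpBinder.cpp:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        // mHandle is the handle obtained from ServiceManager; IPCThreadState
        // packages the Parcel into a binder_transaction_data and writes it
        // to the driver, which wakes up the target process.
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}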

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    ……
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        }
        break;
    ……
    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}

Look at this part:

if (tr.target.ptr) {
    sp<BBinder> b((BBinder*)tr.cookie);
    const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
    if (error < NO_ERROR) reply.setError(error);
} else {
    const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
    if (error < NO_ERROR) reply.setError(error);
}
The b here is in fact the MediaPlayerService object we created during addService; after travelling through the Binder driver and being converted back (the driver hands the server its original pointer in tr.cookie), it comes out as a BBinder pointer.

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

Look at the following inheritance relationship:

class MediaPlayerService : public BnMediaPlayerService
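BnMediaPlayerService in turn derives from BnInterface<IMediaPlayerService>, and the BnInterface template mixes in BBinder, which is why the server object can be held through a BBinder*. A trimmed-down sketch of the template from IInterface.h:

// Simplified sketch of the BnInterface template in IInterface.h.
// BnMediaPlayerService is declared roughly as:
//   class BnMediaPlayerService : public BnInterface<IMediaPlayerService> { ... };
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
    virtual const String16&     getInterfaceDescriptor() const;

protected:
    virtual IBinder*            onAsBinder();
};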

MediaPlayerService itself does not override transact, so b->transact(tr.code, buffer, &reply, tr.flags) lands in the inherited BBinder::transact, which dispatches to the virtual onTransact that BnMediaPlayerService overrides (see the sketch below).
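A simplified sketch of that dispatch, paraphrased from BBinder::transact in Binder.cpp (details vary slightly by version):

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            // For normal transaction codes such as CREATE, this lands in the
            // onTransact override provided by BnMediaPlayerService.
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}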

Open IMediaPlayerService.cpp and find BnMediaPlayerService's onTransact method:

status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaPlayerClient> client =
                interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            int audioSessionId = data.readInt32();
            sp<IMediaPlayer> player = create(client, audioSessionId);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        ……
}
The create called in sp<IMediaPlayer> player = create(client, audioSessionId) is implemented in the MediaPlayerService class. Turning to MediaPlayerService.cpp:

sp<IMediaPlayer> MediaPlayerService::create(const sp<IMediaPlayerClient>& client,
        int audioSessionId)
{
    pid_t pid = IPCThreadState::self()->getCallingPid();
    int32_t connId = android_atomic_inc(&mNextConnId);

    sp<Client> c = new Client(
            this, pid, connId, client, audioSessionId,
            IPCThreadState::self()->getCallingUid());

    ALOGV("Create new client(%d) from pid %d, uid %d, ", connId, pid,
         IPCThreadState::self()->getCallingUid());

    /* add by Gary. start {{----------------------------------- */
    c->setScreen(mScreen);
    /* add by Gary. end   -----------------------------------}} */
    c->setSubGate(mGlobalSubGate);  // 2012-03-12, add the global interfaces to control the subtitle gate

    wp<Client> w = c;
    {
        Mutex::Autolock lock(mLock);
        mClients.add(w);
    }
    return c;
}
At this point the Server has finished processing the transaction. Next, the result is returned to the client; look at this part:

if ((tr.flags & TF_ONE_WAY) == 0) {
    LOG_ONEWAY("Sending reply to %d!", mCallingPid);
    sendReply(reply, 0);
} else {
    LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}

status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;

    return waitForResponse(NULL, NULL);
}
Calling sendReply writes the result back into the Binder driver, which delivers it to the client process, and the communication is complete. Under the hood this reuses the same writeTransactionData/talkWithDriver path as the request, only with a BC_REPLY command instead of BC_TRANSACTION, as sketched below.
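A simplified sketch of IPCThreadState::writeTransactionData (paraphrased; exact field handling varies by version), showing how the reply Parcel is wrapped into a binder_transaction_data and queued into mOut, to be flushed to the driver by the next talkWithDriver():

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;   // -1 for BC_REPLY: the driver already knows the target
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        // Point the transaction at the Parcel's payload and its object offsets.
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }

    // Queue the command; it reaches the driver on the next talkWithDriver().
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}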
