
C++ Smart Pointers (shared_ptr/weak_ptr): Source Code Analysis

November 5, 2018


C++11 introduces the smart pointers unique_ptr, shared_ptr and weak_ptr, together with related class templates such as enable_shared_from_this. The most widely used of these is shared_ptr. A shared_ptr behaves like an ordinary (built-in/raw) C++ pointer while also managing an object created with new; in other words, shared_ptr implements the RAII idiom in C++, freeing the user from having to release the object's memory and making it easy to manage the object's lifetime without memory leaks. A smart pointer is generally defined as a class template parameterized by the type of the managed object, and it holds a raw pointer to that object internally.

unique_ptr, shared_ptr and weak_ptr have the following characteristics:

- unique_ptr owns the managed object exclusively: at any moment only one unique_ptr can hold ownership; ownership is transferred when the unique_ptr is assigned (moved), and the object is destroyed automatically when its unique_ptr is destroyed.
- shared_ptr shares ownership of the managed object: several shared_ptr instances can own the object at the same time, and the object is destroyed automatically when the last owning shared_ptr is destroyed.
- weak_ptr does not own the object, but it can tell whether the object still exists and can return a shared_ptr to it; one of its uses is to break the cycles that arise when objects hold shared_ptrs to each other, which would otherwise keep them from ever being released (a short usage example follows this list).
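
Before looking at the implementation, here is a minimal usage sketch of the three pointer types (using the std:: versions from <memory>; the tr1 implementation analyzed below behaves the same way):

#include <iostream>
#include <memory>

int main()
{
    // unique_ptr: exclusive ownership, transferred by move, never copied.
    std::unique_ptr<int> up1(new int(1));
    std::unique_ptr<int> up2(std::move(up1));        // up1 is now empty

    // shared_ptr: shared ownership tracked by a use count.
    std::shared_ptr<int> sp1(new int(2));
    std::shared_ptr<int> sp2 = sp1;                  // use_count becomes 2
    std::cout << sp1.use_count() << std::endl;       // prints 2

    // weak_ptr: no ownership, only observation.
    std::weak_ptr<int> wp = sp1;
    if (std::shared_ptr<int> sp3 = wp.lock())        // promote to shared_ptr if alive
        std::cout << *sp3 << std::endl;              // prints 2

    sp1.reset();
    sp2.reset();                                     // last owner gone, the int is deleted
    std::cout << wp.expired() << std::endl;          // prints 1 (true)
    return 0;
}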

So how does C++ implement these behaviors? One implementation of these smart pointers can be found in the GCC source tree (gcc-6.1.0\gcc-6.1.0\libstdc++-v3\include\tr1); analyzing it answers the question, and other implementations such as boost::shared_ptr are similar. The relevant GCC source is as follows:

//  -*- c++ -*-

// copyright (c) 2007-2016 free software foundation, inc.
//
// this file is part of the gnu iso c++ library.  this library is free
// software; you can redistribute it and/or modify it under the
// terms of the gnu general public license as published by the
// free software foundation; either version 3, or (at your option)
// any later version.

// this library is distributed in the hope that it will be useful,
// but without any warranty; without even the implied warranty of
// merchantability or fitness for a particular purpose.  see the
// gnu general public license for more details.

// under section 7 of gpl version 3, you are granted additional
// permissions described in the gcc runtime library exception, version
// 3.1, as published by the free software foundation.

// you should have received a copy of the gnu general public license and
// a copy of the gcc runtime library exception along with this program;
// see the files copying3 and copying.runtime respectively.  if not, see
// <http://www.gnu.org/licenses/>.

//  shared_count.hpp
//  copyright (c) 2001, 2002, 2003 peter dimov and multi media ltd.

//  shared_ptr.hpp
//  copyright (c) 1998, 1999 greg colvin and beman dawes.
//  copyright (c) 2001, 2002, 2003 peter dimov

//  weak_ptr.hpp
//  copyright (c) 2001, 2002, 2003 peter dimov

//  enable_shared_from_this.hpp
//  copyright (c) 2002 peter dimov

// distributed under the boost software license, version 1.0. (see
// accompanying file license_1_0.txt or copy at
// https://www.boost.org/license_1_0.txt)

// gcc note:  based on version 1.32.0 of the boost library.

/** @file tr1/shared_ptr.h
 *  this is an internal header file, included by other library headers.
 *  do not attempt to use it directly. @headername{tr1/memory}
 */

#ifndef _tr1_shared_ptr_h
#define _tr1_shared_ptr_h 1

namespace std _glibcxx_visibility(default)
{
namespace tr1
{
_glibcxx_begin_namespace_version

 /**
   *  @brief  exception possibly thrown by @c shared_ptr.
   *  @ingroup exceptions
   */
  class bad_weak_ptr : public std::exception
  {
  public:
    virtual char const*
    what() const throw()
    { return "tr1::bad_weak_ptr"; }
  };

  // substitute for bad_weak_ptr object in the case of -fno-exceptions.
  inline void
  __throw_bad_weak_ptr()
  { _glibcxx_throw_or_abort(bad_weak_ptr()); }

  using __gnu_cxx::_lock_policy;
  using __gnu_cxx::__default_lock_policy;
  using __gnu_cxx::_s_single;
  using __gnu_cxx::_s_mutex;
  using __gnu_cxx::_s_atomic;

  // empty helper class except when the template argument is _s_mutex.
  template<_lock_policy _lp>
    class _mutex_base
    {
    protected:
      // the atomic policy uses fully-fenced builtins, single doesn't care.
      enum { _s_need_barriers = 0 };
    };

  template<>
    class _mutex_base<_s_mutex>
    : public __gnu_cxx::__mutex
    {
    protected:
      // this policy is used when atomic builtins are not available.
      // the replacement atomic operations might not have the necessary
      // memory barriers.
      enum { _s_need_barriers = 1 };
    };

  template<_lock_policy _lp = __default_lock_policy>
    class _sp_counted_base
    : public _mutex_base<_lp>
    {
    public:  
      _sp_counted_base()
      : _m_use_count(1), _m_weak_count(1) { }
      
      virtual
      ~_sp_counted_base() // nothrow 
      { }
  
      // called when _m_use_count drops to zero, to release the resources
      // managed by *this.
      virtual void
      _m_dispose() = 0; // nothrow
      
      // called when _m_weak_count drops to zero.
      virtual void
      _m_destroy() // nothrow
      { delete this; }
      
      virtual void*
      _m_get_deleter(const std::type_info&) = 0;

      void
      _m_add_ref_copy()
      { __gnu_cxx::__atomic_add_dispatch(&_m_use_count, 1); }
  
      void
      _m_add_ref_lock();
      
      void
      _m_release() // nothrow
      {
        // be race-detector-friendly.  for more info see bits/c++config.
        _glibcxx_synchronization_happens_before(&_m_use_count);
	if (__gnu_cxx::__exchange_and_add_dispatch(&_m_use_count, -1) == 1)
	  {
            _glibcxx_synchronization_happens_after(&_m_use_count);
	    _m_dispose();
	    // there must be a memory barrier between dispose() and destroy()
	    // to ensure that the effects of dispose() are observed in the
	    // thread that runs destroy().
	    // see https://gcc.gnu.org/ml/libstdc++/2005-11/msg00136.html
	    if (_mutex_base<_lp>::_s_need_barriers)
	      {
		__atomic_thread_fence (__atomic_acq_rel);
	      }

            // be race-detector-friendly.  for more info see bits/c++config.
            _glibcxx_synchronization_happens_before(&_m_weak_count);
	    if (__gnu_cxx::__exchange_and_add_dispatch(&_m_weak_count,
						       -1) == 1)
              {
                _glibcxx_synchronization_happens_after(&_m_weak_count);
	        _m_destroy();
              }
	  }
      }
  
      void
      _m_weak_add_ref() // nothrow
      { __gnu_cxx::__atomic_add_dispatch(&_m_weak_count, 1); }

      void
      _m_weak_release() // nothrow
      {
        // be race-detector-friendly. for more info see bits/c++config.
        _glibcxx_synchronization_happens_before(&_m_weak_count);
	if (__gnu_cxx::__exchange_and_add_dispatch(&_m_weak_count, -1) == 1)
	  {
            _glibcxx_synchronization_happens_after(&_m_weak_count);
	    if (_mutex_base<_lp>::_s_need_barriers)
	      {
	        // see _m_release(),
	        // destroy() must observe results of dispose()
		__atomic_thread_fence (__atomic_acq_rel);
	      }
	    _m_destroy();
	  }
      }
  
      long
      _m_get_use_count() const // nothrow
      {
        // no memory barrier is used here so there is no synchronization
        // with other threads.
        return const_cast<const volatile _atomic_word&>(_m_use_count);
      }

    private:  
      _sp_counted_base(_sp_counted_base const&);
      _sp_counted_base& operator=(_sp_counted_base const&);

      _atomic_word  _m_use_count;     // #shared
      _atomic_word  _m_weak_count;    // #weak + (#shared != 0)
    };

  template<>
    inline void
    _sp_counted_base<_s_single>::
    _m_add_ref_lock()
    {
      if (__gnu_cxx::__exchange_and_add_dispatch(&_m_use_count, 1) == 0)
	{
	  _m_use_count = 0;
	  __throw_bad_weak_ptr();
	}
    }

  template<>
    inline void
    _sp_counted_base<_s_mutex>::
    _m_add_ref_lock()
    {
      __gnu_cxx::__scoped_lock sentry(*this);
      if (__gnu_cxx::__exchange_and_add_dispatch(&_m_use_count, 1) == 0)
	{
	  _m_use_count = 0;
	  __throw_bad_weak_ptr();
	}
    }

  template<> 
    inline void
    _sp_counted_base<_s_atomic>::
    _m_add_ref_lock()
    {
      // perform lock-free add-if-not-zero operation.
      _atomic_word __count = _m_use_count;
      do
	{
	  if (__count == 0)
	    __throw_bad_weak_ptr();
	  // replace the current counter value with the old value + 1, as
	  // long as it's not changed meanwhile. 
	}
      while (!__atomic_compare_exchange_n(&_m_use_count, &__count, __count + 1,
					  true, __atomic_acq_rel, 
					  __atomic_relaxed));
     }

  template<typename _ptr, typename _deleter, _lock_policy _lp>
    class _sp_counted_base_impl
    : public _sp_counted_base<_lp>
    {
    public:
      // precondition: __d(__p) must not throw.
      _sp_counted_base_impl(_ptr __p, _deleter __d)
      : _m_ptr(__p), _m_del(__d) { }
    
      virtual void
      _m_dispose() // nothrow
      { _m_del(_m_ptr); }
      
      virtual void*
      _m_get_deleter(const std::type_info& __ti)
      {
#if __cpp_rtti
        return __ti == typeid(_deleter) ? &_m_del : 0;
#else
        return 0;
#endif
      }
      
    private:
      _sp_counted_base_impl(const _sp_counted_base_impl&);
      _sp_counted_base_impl& operator=(const _sp_counted_base_impl&);
      
      _ptr      _m_ptr;  // copy constructor must not throw
      _deleter  _m_del;  // copy constructor must not throw
    };

  template<_lock_policy _lp = __default_lock_policy>
    class __weak_count;

  template<typename _tp>
    struct _sp_deleter
    {
      typedef void result_type;
      typedef _tp* argument_type;
      void operator()(_tp* __p) const { delete __p; }
    };

  template<_lock_policy _lp = __default_lock_policy>
    class __shared_count
    {
    public: 
      __shared_count()
      : _m_pi(0) // nothrow
      { }
  
      template<typename _ptr>
        __shared_count(_ptr __p) : _m_pi(0)
        {
	  __try
	    {
	      typedef typename std::tr1::remove_pointer<_ptr>::type _tp;
	      _m_pi = new _sp_counted_base_impl<_ptr, _sp_deleter<_tp>, _lp>(
	          __p, _sp_deleter<_tp>());
	    }
	  __catch(...)
	    {
	      delete __p;
	      __throw_exception_again;
	    }
	}

      template<typename _ptr, typename _deleter>
        __shared_count(_ptr __p, _deleter __d) : _m_pi(0)
        {
	  __try
	    {
	      _m_pi = new _sp_counted_base_impl<_ptr, _deleter, _lp>(__p, __d);
	    }
	  __catch(...)
	    {
	      __d(__p); // call _deleter on __p.
	      __throw_exception_again;
	    }
	}

      // special case for auto_ptr<_tp> to provide the strong guarantee.
      template<typename _tp>
        explicit
        __shared_count(std::auto_ptr<_tp>& __r)
	: _m_pi(new _sp_counted_base_impl<_tp*,
		_sp_deleter<_tp>, _lp >(__r.get(), _sp_deleter<_tp>()))
        { __r.release(); }

      // throw bad_weak_ptr when __r._m_get_use_count() == 0.
      explicit
      __shared_count(const __weak_count<_lp>& __r);
  
      ~__shared_count() // nothrow
      {
	if (_m_pi != 0)
	  _m_pi->_m_release();
      }
      
      __shared_count(const __shared_count& __r)
      : _m_pi(__r._m_pi) // nothrow
      {
	if (_m_pi != 0)
	  _m_pi->_m_add_ref_copy();
      }
  
      __shared_count&
      operator=(const __shared_count& __r) // nothrow
      {
	_sp_counted_base<_lp>* __tmp = __r._m_pi;
	if (__tmp != _m_pi)
	  {
	    if (__tmp != 0)
	      __tmp->_m_add_ref_copy();
	    if (_m_pi != 0)
	      _m_pi->_m_release();
	    _m_pi = __tmp;
	  }
	return *this;
      }
  
      void
      _m_swap(__shared_count& __r) // nothrow
      {
	_sp_counted_base<_lp>* __tmp = __r._m_pi;
	__r._m_pi = _m_pi;
	_m_pi = __tmp;
      }
  
      long
      _m_get_use_count() const // nothrow
      { return _m_pi != 0 ? _m_pi->_m_get_use_count() : 0; }

      bool
      _m_unique() const // nothrow
      { return this->_m_get_use_count() == 1; }
      
      friend inline bool
      operator==(const __shared_count& __a, const __shared_count& __b)
      { return __a._m_pi == __b._m_pi; }
  
      friend inline bool
      operator<(const __shared_count& __a, const __shared_count& __b)
      { return std::less<_sp_counted_base<_lp>*>()(__a._m_pi, __b._m_pi); }
  
      void*
      _m_get_deleter(const std::type_info& __ti) const
      { return _m_pi ? _m_pi->_m_get_deleter(__ti) : 0; }

    private:
      friend class __weak_count<_lp>;

      _sp_counted_base<_lp>*  _m_pi;
    };


  template<_lock_policy _lp>
    class __weak_count
    {
    public:
      __weak_count()
      : _m_pi(0) // nothrow
      { }
  
      __weak_count(const __shared_count<_lp>& __r)
      : _m_pi(__r._m_pi) // nothrow
      {
	if (_m_pi != 0)
	  _m_pi->_m_weak_add_ref();
      }
      
      __weak_count(const __weak_count<_lp>& __r)
      : _m_pi(__r._m_pi) // nothrow
      {
	if (_m_pi != 0)
	  _m_pi->_m_weak_add_ref();
      }
      
      ~__weak_count() // nothrow
      {
	if (_m_pi != 0)
	  _m_pi->_m_weak_release();
      }
      
      __weak_count<_lp>&
      operator=(const __shared_count<_lp>& __r) // nothrow
      {
	_sp_counted_base<_lp>* __tmp = __r._m_pi;
	if (__tmp != 0)
	  __tmp->_m_weak_add_ref();
	if (_m_pi != 0)
	  _m_pi->_m_weak_release();
	_m_pi = __tmp;  
	return *this;
      }
      
      __weak_count<_lp>&
      operator=(const __weak_count<_lp>& __r) // nothrow
      {
	_sp_counted_base<_lp>* __tmp = __r._m_pi;
	if (__tmp != 0)
	  __tmp->_m_weak_add_ref();
	if (_m_pi != 0)
	  _m_pi->_m_weak_release();
	_m_pi = __tmp;
	return *this;
      }

      void
      _m_swap(__weak_count<_lp>& __r) // nothrow
      {
	_sp_counted_base<_lp>* __tmp = __r._m_pi;
	__r._m_pi = _m_pi;
	_m_pi = __tmp;
      }
  
      long
      _m_get_use_count() const // nothrow
      { return _m_pi != 0 ? _m_pi->_m_get_use_count() : 0; }

      friend inline bool
      operator==(const __weak_count<_lp>& __a, const __weak_count<_lp>& __b)
      { return __a._m_pi == __b._m_pi; }
      
      friend inline bool
      operator<(const __weak_count<_lp>& __a, const __weak_count<_lp>& __b)
      { return std::less<_sp_counted_base<_lp>*>()(__a._m_pi, __b._m_pi); }

    private:
      friend class __shared_count<_lp>;

      _sp_counted_base<_lp>*  _m_pi;
    };

  // now that __weak_count is defined we can define this constructor:
  template<_lock_policy _lp>
    inline
    __shared_count<_lp>::
    __shared_count(const __weak_count<_lp>& __r)
    : _m_pi(__r._m_pi)
    {
      if (_m_pi != 0)
	_m_pi->_m_add_ref_lock();
      else
	__throw_bad_weak_ptr();
    }

  // forward declarations.
  template<typename _tp, _lock_policy _lp = __default_lock_policy>
    class __shared_ptr;
  
  template<typename _tp, _lock_policy _lp = __default_lock_policy>
    class __weak_ptr;

  template<typename _tp, _lock_policy _lp = __default_lock_policy>
    class __enable_shared_from_this;

  template<typename _tp>
    class shared_ptr;
  
  template<typename _tp>
    class weak_ptr;

  template<typename _tp>
    class enable_shared_from_this;

  // support for enable_shared_from_this.

  // friend of __enable_shared_from_this.
  template<_lock_policy _lp, typename _tp1, typename _tp2>
    void
    __enable_shared_from_this_helper(const __shared_count<_lp>&,
				     const __enable_shared_from_this<_tp1,
				     _lp>*, const _tp2*);

  // friend of enable_shared_from_this.
  template<typename _tp1, typename _tp2>
    void
    __enable_shared_from_this_helper(const __shared_count<>&,
				     const enable_shared_from_this<_tp1>*,
				     const _tp2*);

  template<_lock_policy _lp>
    inline void
    __enable_shared_from_this_helper(const __shared_count<_lp>&, ...)
    { }


  struct __static_cast_tag { };
  struct __const_cast_tag { };
  struct __dynamic_cast_tag { };

  // a smart pointer with reference-counted copy semantics.  the
  // object pointed to is deleted when the last shared_ptr pointing to
  // it is destroyed or reset.
  template<typename _tp, _lock_policy _lp>
    class __shared_ptr
    {
    public:
      typedef _tp   element_type;
      
      __shared_ptr()
      : _m_ptr(0), _m_refcount() // never throws
      { }

      template<typename _tp1>
        explicit
        __shared_ptr(_tp1* __p)
	: _m_ptr(__p), _m_refcount(__p)
        {
	  __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)
	  typedef int _iscomplete[sizeof(_tp1)];
	  __enable_shared_from_this_helper(_m_refcount, __p, __p);
	}

      template<typename _tp1, typename _deleter>
        __shared_ptr(_tp1* __p, _deleter __d)
        : _m_ptr(__p), _m_refcount(__p, __d)
        {
	  __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)
	  // todo requires _deleter copyconstructible and __d(__p) well-formed
	  __enable_shared_from_this_helper(_m_refcount, __p, __p);
	}
      
      //  generated copy constructor, assignment, destructor are fine.
      
      template<typename _tp1>
        __shared_ptr(const __shared_ptr<_tp1, _lp>& __r)
	: _m_ptr(__r._m_ptr), _m_refcount(__r._m_refcount) // never throws
        { __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>) }

      template<typename _tp1>
        explicit
        __shared_ptr(const __weak_ptr<_tp1, _lp>& __r)
	: _m_refcount(__r._m_refcount) // may throw
        {
	  __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)
	  // it is now safe to copy __r._m_ptr, as _m_refcount(__r._m_refcount)
	  // did not throw.
	  _m_ptr = __r._m_ptr;
	}

#if (__cplusplus < 201103l) || _glibcxx_use_deprecated
      // postcondition: use_count() == 1 and __r.get() == 0
      template<typename _tp1>
        explicit
        __shared_ptr(std::auto_ptr<_tp1>& __r)
	: _m_ptr(__r.get()), _m_refcount()
        { // todo requries delete __r.release() well-formed
	  __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)
	  typedef int _iscomplete[sizeof(_tp1)];
	  _tp1* __tmp = __r.get();
	  _m_refcount = __shared_count<_lp>(__r);
	  __enable_shared_from_this_helper(_m_refcount, __tmp, __tmp);
	}

#endif

      template<typename _tp1>
        __shared_ptr(const __shared_ptr<_tp1, _lp>& __r, __static_cast_tag)
	: _m_ptr(static_cast<element_type*>(__r._m_ptr)),
	  _m_refcount(__r._m_refcount)
        { }

      template<typename _tp1>
        __shared_ptr(const __shared_ptr<_tp1, _lp>& __r, __const_cast_tag)
	: _m_ptr(const_cast<element_type*>(__r._m_ptr)),
	  _m_refcount(__r._m_refcount)
        { }

      template<typename _tp1>
        __shared_ptr(const __shared_ptr<_tp1, _lp>& __r, __dynamic_cast_tag)
	: _m_ptr(dynamic_cast<element_type*>(__r._m_ptr)),
	  _m_refcount(__r._m_refcount)
        {
	  if (_m_ptr == 0) // need to allocate new counter -- the cast failed
	    _m_refcount = __shared_count<_lp>();
	}

      template<typename _tp1>
        __shared_ptr&
        operator=(const __shared_ptr<_tp1, _lp>& __r) // never throws
        {
	  _m_ptr = __r._m_ptr;
	  _m_refcount = __r._m_refcount; // __shared_count::op= doesn't throw
	  return *this;
	}

#if (__cplusplus < 201103l) || _glibcxx_use_deprecated
      template<typename _tp1>
        __shared_ptr&
        operator=(std::auto_ptr<_tp1>& __r)
        {
	  __shared_ptr(__r).swap(*this);
	  return *this;
	}
#endif

      void
      reset() // never throws
      { __shared_ptr().swap(*this); }

      template<typename _tp1>
        void
        reset(_tp1* __p) // _tp1 must be complete.
        {
	  // catch self-reset errors.
	  _glibcxx_debug_assert(__p == 0 || __p != _m_ptr); 
	  __shared_ptr(__p).swap(*this);
	}

      template<typename _tp1, typename _deleter>
        void
        reset(_tp1* __p, _deleter __d)
        { __shared_ptr(__p, __d).swap(*this); }

      // allow class instantiation when _tp is [cv-qual] void.
      typename std::tr1::add_reference<_tp>::type
      operator*() const // never throws
      {
	_glibcxx_debug_assert(_m_ptr != 0);
	return *_m_ptr;
      }

      _tp*
      operator->() const // never throws
      {
	_glibcxx_debug_assert(_m_ptr != 0);
	return _m_ptr;
      }
    
      _tp*
      get() const // never throws
      { return _m_ptr; }

      // implicit conversion to "bool"
    private:
      typedef _tp* __shared_ptr::*__unspecified_bool_type;

    public:
      operator __unspecified_bool_type() const // never throws
      { return _m_ptr == 0 ? 0 : &__shared_ptr::_m_ptr; }

      bool
      unique() const // never throws
      { return _m_refcount._m_unique(); }

      long
      use_count() const // never throws
      { return _m_refcount._m_get_use_count(); }

      void
      swap(__shared_ptr<_tp, _lp>& __other) // never throws
      {
	std::swap(_m_ptr, __other._m_ptr);
	_m_refcount._m_swap(__other._m_refcount);
      }

    private:
      void*
      _m_get_deleter(const std::type_info& __ti) const
      { return _m_refcount._m_get_deleter(__ti); }

      template<typename _tp1, _lock_policy _lp1>
        bool
        _m_less(const __shared_ptr<_tp1, _lp1>& __rhs) const
        { return _m_refcount < __rhs._m_refcount; }

      template<typename _tp1, _lock_policy _lp1> friend class __shared_ptr;
      template<typename _tp1, _lock_policy _lp1> friend class __weak_ptr;

      template<typename _del, typename _tp1, _lock_policy _lp1>
        friend _del* get_deleter(const __shared_ptr<_tp1, _lp1>&);

      // friends injected into enclosing namespace and found by adl:
      template<typename _tp1>
        friend inline bool
        operator==(const __shared_ptr& __a, const __shared_ptr<_tp1, _lp>& __b)
        { return __a.get() == __b.get(); }

      template<typename _tp1>
        friend inline bool
        operator!=(const __shared_ptr& __a, const __shared_ptr<_tp1, _lp>& __b)
        { return __a.get() != __b.get(); }

      template<typename _tp1>
        friend inline bool
        operator<(const __shared_ptr& __a, const __shared_ptr<_tp1, _lp>& __b)
        { return __a._m_less(__b); }

      _tp*         	   _m_ptr;         // contained pointer.
      __shared_count<_lp>  _m_refcount;    // reference counter.
    };

  // 2.2.3.8 shared_ptr specialized algorithms.
  template<typename _tp, _lock_policy _lp>
    inline void
    swap(__shared_ptr<_tp, _lp>& __a, __shared_ptr<_tp, _lp>& __b)
    { __a.swap(__b); }

  // 2.2.3.9 shared_ptr casts
  /*  the seemingly equivalent
   *           shared_ptr<_tp, _lp>(static_cast<_tp*>(__r.get()))
   *  will eventually result in undefined behaviour,
   *  attempting to delete the same object twice.
   */
  template<typename _tp, typename _tp1, _lock_policy _lp>
    inline __shared_ptr<_tp, _lp>
    static_pointer_cast(const __shared_ptr<_tp1, _lp>& __r)
    { return __shared_ptr<_tp, _lp>(__r, __static_cast_tag()); }

  /*  the seemingly equivalent
   *           shared_ptr<_tp, _lp>(const_cast<_tp*>(__r.get()))
   *  will eventually result in undefined behaviour,
   *  attempting to delete the same object twice.
   */
  template<typename _tp, typename _tp1, _lock_policy _lp>
    inline __shared_ptr<_tp, _lp>
    const_pointer_cast(const __shared_ptr<_tp1, _lp>& __r)
    { return __shared_ptr<_tp, _lp>(__r, __const_cast_tag()); }

  /*  the seemingly equivalent
   *           shared_ptr<_tp, _lp>(dynamic_cast<_tp*>(__r.get()))
   *  will eventually result in undefined behaviour,
   *  attempting to delete the same object twice.
   */
  template<typename _tp, typename _tp1, _lock_policy _lp>
    inline __shared_ptr<_tp, _lp>
    dynamic_pointer_cast(const __shared_ptr<_tp1, _lp>& __r)
    { return __shared_ptr<_tp, _lp>(__r, __dynamic_cast_tag()); }

  // 2.2.3.7 shared_ptr i/o
  template<typename _ch, typename _tr, typename _tp, _lock_policy _lp>
    std::basic_ostream<_ch, _tr>&
    operator<<(std::basic_ostream<_ch, _tr>& __os, 
	       const __shared_ptr<_tp, _lp>& __p)
    {
      __os << __p.get();
      return __os;
    }

  // 2.2.3.10 shared_ptr get_deleter (experimental)
  template<typename _del, typename _tp, _lock_policy _lp>
    inline _del*
    get_deleter(const __shared_ptr<_tp, _lp>& __p)
    {
#if __cpp_rtti
      return static_cast<_del*>(__p._m_get_deleter(typeid(_del)));
#else
      return 0;
#endif
    }


  template<typename _tp, _lock_policy _lp>
    class __weak_ptr
    {
    public:
      typedef _tp element_type;
      
      __weak_ptr()
      : _m_ptr(0), _m_refcount() // never throws
      { }

      // generated copy constructor, assignment, destructor are fine.
      
      // the "obvious" converting constructor implementation:
      //
      //  template<typename _tp1>
      //    __weak_ptr(const __weak_ptr<_tp1, _lp>& __r)
      //    : _m_ptr(__r._m_ptr), _m_refcount(__r._m_refcount) // never throws
      //    { }
      //
      // has a serious problem.
      //
      //  __r._m_ptr may already have been invalidated. the _m_ptr(__r._m_ptr)
      //  conversion may require access to *__r._m_ptr (virtual inheritance).
      //
      // it is not possible to avoid spurious access violations since
      // in multithreaded programs __r._m_ptr may be invalidated at any point.
      template<typename _tp1>
        __weak_ptr(const __weak_ptr<_tp1, _lp>& __r)
	: _m_refcount(__r._m_refcount) // never throws
        {
	  __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)
	  _m_ptr = __r.lock().get();
	}

      template<typename _tp1>
        __weak_ptr(const __shared_ptr<_tp1, _lp>& __r)
	: _m_ptr(__r._m_ptr), _m_refcount(__r._m_refcount) // never throws
        { __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>) }

      template<typename _tp1>
        __weak_ptr&
        operator=(const __weak_ptr<_tp1, _lp>& __r) // never throws
        {
	  _m_ptr = __r.lock().get();
	  _m_refcount = __r._m_refcount;
	  return *this;
	}
      
      template<typename _tp1>
        __weak_ptr&
        operator=(const __shared_ptr<_tp1, _lp>& __r) // never throws
        {
	  _m_ptr = __r._m_ptr;
	  _m_refcount = __r._m_refcount;
	  return *this;
	}

      __shared_ptr<_tp, _lp>
      lock() const // never throws
      {
#ifdef __gthreads
	// optimization: avoid throw overhead.
	if (expired())
	  return __shared_ptr();

	__try
	  {
	    return __shared_ptr(*this);
	  }
	__catch(const bad_weak_ptr&)
	  {
	    // q: how can we get here?
	    // a: another thread may have invalidated r after the
	    //    use_count test above.
	    return __shared_ptr();
	  }
	
#else
	// optimization: avoid try/catch overhead when single threaded.
	return expired() ? __shared_ptr()
	                 : __shared_ptr(*this);

#endif
      } // xxx mt

      long
      use_count() const // never throws
      { return _m_refcount._m_get_use_count(); }

      bool
      expired() const // never throws
      { return _m_refcount._m_get_use_count() == 0; }
      
      void
      reset() // never throws
      { __weak_ptr().swap(*this); }

      void
      swap(__weak_ptr& __s) // never throws
      {
	std::swap(_m_ptr, __s._m_ptr);
	_m_refcount._m_swap(__s._m_refcount);
      }

    private:
      // used by __enable_shared_from_this.
      void
      _m_assign(_tp* __ptr, const __shared_count<_lp>& __refcount)
      {
	_m_ptr = __ptr;
	_m_refcount = __refcount;
      }

      template<typename _tp1, _lock_policy _lp1>
        bool
        _m_less(const __weak_ptr<_tp1, _lp>& __rhs) const
        { return _m_refcount < __rhs._m_refcount; }

      template<typename _tp1, _lock_policy _lp1> friend class __shared_ptr;
      template<typename _tp1, _lock_policy _lp1> friend class __weak_ptr;
      friend class __enable_shared_from_this<_tp, _lp>;
      friend class enable_shared_from_this<_tp>;

      // friend injected into namespace and found by adl.
      template<typename _tp1>
        friend inline bool
        operator<(const __weak_ptr& __lhs, const __weak_ptr<_tp1, _lp>& __rhs)
        { return __lhs._m_less(__rhs); }

      _tp*       	 _m_ptr;         // contained pointer.
      __weak_count<_lp>  _m_refcount;    // reference counter.
    };

  // 2.2.4.7 weak_ptr specialized algorithms.
  template<typename _tp, _lock_policy _lp>
    inline void
    swap(__weak_ptr<_tp, _lp>& __a, __weak_ptr<_tp, _lp>& __b)
    { __a.swap(__b); }


  template<typename _tp, _lock_policy _lp>
    class __enable_shared_from_this
    {
    protected:
      __enable_shared_from_this() { }
      
      __enable_shared_from_this(const __enable_shared_from_this&) { }
      
      __enable_shared_from_this&
      operator=(const __enable_shared_from_this&)
      { return *this; }

      ~__enable_shared_from_this() { }
      
    public:
      __shared_ptr<_tp, _lp>
      shared_from_this()
      { return __shared_ptr<_tp, _lp>(this->_m_weak_this); }

      __shared_ptr<const _tp, _lp>
      shared_from_this() const
      { return __shared_ptr<const _tp, _lp>(this->_m_weak_this); }

    private:
      template<typename _tp1>
        void
        _m_weak_assign(_tp1* __p, const __shared_count<_lp>& __n) const
        { _m_weak_this._m_assign(__p, __n); }

      template<typename _tp1>
        friend void
        __enable_shared_from_this_helper(const __shared_count<_lp>& __pn,
					 const __enable_shared_from_this* __pe,
					 const _tp1* __px)
        {
	  if (__pe != 0)
	    __pe->_m_weak_assign(const_cast<_tp1*>(__px), __pn);
	}

      mutable __weak_ptr<_tp, _lp>  _m_weak_this;
    };


  // the actual shared_ptr, with forwarding constructors and
  // assignment operators.
  template<typename _tp>
    class shared_ptr
    : public __shared_ptr<_tp>
    {
    public:
      shared_ptr()
      : __shared_ptr<_tp>() { }

      template<typename _tp1>
        explicit
        shared_ptr(_tp1* __p)
	: __shared_ptr<_tp>(__p) { }

      template<typename _tp1, typename _deleter>
        shared_ptr(_tp1* __p, _deleter __d)
	: __shared_ptr<_tp>(__p, __d) { }

      template<typename _tp1>
        shared_ptr(const shared_ptr<_tp1>& __r)
	: __shared_ptr<_tp>(__r) { }

      template<typename _tp1>
        explicit
        shared_ptr(const weak_ptr<_tp1>& __r)
	: __shared_ptr<_tp>(__r) { }

#if (__cplusplus < 201103l) || _glibcxx_use_deprecated
      template<typename _tp1>
        explicit
        shared_ptr(std::auto_ptr<_tp1>& __r)
	: __shared_ptr<_tp>(__r) { }
#endif

      template<typename _tp1>
        shared_ptr(const shared_ptr<_tp1>& __r, __static_cast_tag)
	: __shared_ptr<_tp>(__r, __static_cast_tag()) { }

      template<typename _tp1>
        shared_ptr(const shared_ptr<_tp1>& __r, __const_cast_tag)
	: __shared_ptr<_tp>(__r, __const_cast_tag()) { }

      template<typename _tp1>
        shared_ptr(const shared_ptr<_tp1>& __r, __dynamic_cast_tag)
	: __shared_ptr<_tp>(__r, __dynamic_cast_tag()) { }

      template<typename _tp1>
        shared_ptr&
        operator=(const shared_ptr<_tp1>& __r) // never throws
        {
	  this->__shared_ptr<_tp>::operator=(__r);
	  return *this;
	}

#if (__cplusplus < 201103l) || _glibcxx_use_deprecated
      template<typename _tp1>
        shared_ptr&
        operator=(std::auto_ptr<_tp1>& __r)
        {
	  this->__shared_ptr<_tp>::operator=(__r);
	  return *this;
	}
#endif
    };

  // 2.2.3.8 shared_ptr specialized algorithms.
  template<typename _tp>
    inline void
    swap(__shared_ptr<_tp>& __a, __shared_ptr<_tp>& __b)
    { __a.swap(__b); }

  template<typename _tp, typename _tp1>
    inline shared_ptr<_tp>
    static_pointer_cast(const shared_ptr<_tp1>& __r)
    { return shared_ptr<_tp>(__r, __static_cast_tag()); }

  template<typename _tp, typename _tp1>
    inline shared_ptr<_tp>
    const_pointer_cast(const shared_ptr<_tp1>& __r)
    { return shared_ptr<_tp>(__r, __const_cast_tag()); }

  template<typename _tp, typename _tp1>
    inline shared_ptr<_tp>
    dynamic_pointer_cast(const shared_ptr<_tp1>& __r)
    { return shared_ptr<_tp>(__r, __dynamic_cast_tag()); }


  // the actual weak_ptr, with forwarding constructors and
  // assignment operators.
  template<typename _tp>
    class weak_ptr
    : public __weak_ptr<_tp>
    {
    public:
      weak_ptr()
      : __weak_ptr<_tp>() { }
      
      template<typename _tp1>
        weak_ptr(const weak_ptr<_tp1>& __r)
	: __weak_ptr<_tp>(__r) { }

      template<typename _tp1>
        weak_ptr(const shared_ptr<_tp1>& __r)
	: __weak_ptr<_tp>(__r) { }

      template<typename _tp1>
        weak_ptr&
        operator=(const weak_ptr<_tp1>& __r) // never throws
        {
	  this->__weak_ptr<_tp>::operator=(__r);
	  return *this;
	}

      template<typename _tp1>
        weak_ptr&
        operator=(const shared_ptr<_tp1>& __r) // never throws
        {
	  this->__weak_ptr<_tp>::operator=(__r);
	  return *this;
	}

      shared_ptr<_tp>
      lock() const // never throws
      {
#ifdef __gthreads
	if (this->expired())
	  return shared_ptr<_tp>();

	__try
	  {
	    return shared_ptr<_tp>(*this);
	  }
	__catch(const bad_weak_ptr&)
	  {
	    return shared_ptr<_tp>();
	  }
#else
	return this->expired() ? shared_ptr<_tp>()
	                       : shared_ptr<_tp>(*this);
#endif
      }
    };

  template<typename _tp>
    class enable_shared_from_this
    {
    protected:
      enable_shared_from_this() { }
      
      enable_shared_from_this(const enable_shared_from_this&) { }

      enable_shared_from_this&
      operator=(const enable_shared_from_this&)
      { return *this; }

      ~enable_shared_from_this() { }

    public:
      shared_ptr<_tp>
      shared_from_this()
      { return shared_ptr<_tp>(this->_m_weak_this); }

      shared_ptr<const _tp>
      shared_from_this() const
      { return shared_ptr<const _tp>(this->_m_weak_this); }

    private:
      template<typename _tp1>
        void
        _m_weak_assign(_tp1* __p, const __shared_count<>& __n) const
        { _m_weak_this._m_assign(__p, __n); }

      template<typename _tp1>
        friend void
        __enable_shared_from_this_helper(const __shared_count<>& __pn,
					 const enable_shared_from_this* __pe,
					 const _tp1* __px)
        {
	  if (__pe != 0)
	    __pe->_m_weak_assign(const_cast<_tp1*>(__px), __pn);
	}

      mutable weak_ptr<_tp>  _m_weak_this;
    };

_glibcxx_end_namespace_version
}
}

#endif // _tr1_shared_ptr_h
The main class relationships are shown below (the relevant template parameters are omitted):

 

[figure: class diagram of shared_ptr / weak_ptr, __shared_count / __weak_count and the manager object]

From the class diagram above it is clear that a shared_ptr contains a pointer to the managed object T plus a __shared_count object; the __shared_count contains a base-class pointer to the manager object, and the manager object consists of an atomically updated use_count and weak_count, a pointer to the managed object T, and the deleter used to destroy it:

[figure: internal structure of shared_ptr and its manager object]

The internal layout of weak_ptr is similar to that of shared_ptr: it likewise contains a pointer to the managed object T plus a __weak_count object:

[figure: internal structure of weak_ptr]

Clearly, the differences between shared_ptr and weak_ptr are expressed mainly through __shared_ptr and __weak_ptr, and the differences between __shared_ptr and __weak_ptr in turn come down mainly to __shared_count versus __weak_count.
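
Conceptually, ignoring the lock policy and the type erasure of the deleter, the layout described above can be sketched as follows (an illustrative simplification whose names mirror the GCC source, not the real code):

// manager object ("control block"): shared by every smart pointer that
// refers to the same managed object
struct _sp_counted_base {
    long _m_use_count;                  // number of shared_ptr owners
    long _m_weak_count;                 // number of weak refs + (#shared != 0)
    virtual ~_sp_counted_base() { }
    virtual void _m_dispose() = 0;      // destroy the managed object
    virtual void _m_destroy() { delete this; }  // destroy the manager object itself
};

template<typename _ptr, typename _deleter>
struct _sp_counted_base_impl : _sp_counted_base {
    _ptr     _m_ptr;                    // pointer to the managed object
    _deleter _m_del;                    // deleter invoked by _m_dispose()
    void _m_dispose() { _m_del(_m_ptr); }
};

// what a shared_ptr boils down to: two pointers
template<typename _tp>
struct shared_ptr_layout {
    _tp*              _m_ptr;           // points at the managed object
    _sp_counted_base* _m_pi;            // points at the shared manager object
};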

 

shared_ptr's constructor shows that a shared_ptr is created from the address of a managed object returned by new; internally it builds a __shared_count object, and the __shared_count constructor in turn shows that creating a shared_ptr also dynamically allocates a manager object, an _sp_counted_base_impl:

 

template<typename _tp1>
explicit __shared_ptr(_tp1* __p)
: _m_ptr(__p), _m_refcount(__p) {
    __glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)
    typedef int _iscomplete[sizeof(_tp1)];
    __enable_shared_from_this_helper(_m_refcount, __p, __p);
}

template<typename _ptr>
__shared_count(_ptr __p) : _m_pi(0)
{
    __try
   {
	  typedef typename std::tr1::remove_pointer<_ptr>::type _tp;
	  _m_pi = new _sp_counted_base_impl<_ptr, _sp_deleter<_tp>, _lp>(__p, _sp_deleter<_tp>());
    }
    __catch(...)
    {
        delete __p;
	__throw_exception_again;
    }
}
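
Note that the second __shared_count constructor (shown in the full listing above) stores a user-supplied deleter in the manager object instead of the default _sp_deleter. A small illustrative use of that feature with today's std::shared_ptr:

#include <cstdio>
#include <memory>

int main()
{
    // The deleter lives in the dynamically allocated manager object,
    // so the shared_ptr itself stays the size of two raw pointers.
    std::shared_ptr<std::FILE> fp(std::fopen("data.txt", "r"),
                                  [](std::FILE* f) { if (f) std::fclose(f); });
    // when the last owner goes away, the lambda (not delete) is invoked
    return 0;
}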

 

From the above it is easy to see that shared_ptr holds a pointer _m_ptr to the managed object, and _sp_counted_base_impl also holds a pointer _m_ptr to the managed object. Are they redundant? Actually they are not. The explanation starts with shared_ptr's copy construction and assignment: when a shared_ptr sp2 is copy-constructed or assigned from sp1, after the operation the pointer to the manager object inside sp1's __shared_count is equal to the one inside sp2's __shared_count. In other words, when several shared_ptr objects manage the same object, they all share one dynamically allocated manager object. This can be seen clearly from the __shared_ptr copy constructor and the __shared_count assignment operator below.

 

 

template<typename _tp1>
 __shared_ptr(const __shared_ptr<_tp1, _lp>& __r)
 : _m_ptr(__r._m_ptr), _m_refcount(__r._m_refcount) // never throws
{__glibcxx_function_requires(_convertibleconcept<_tp1*, _tp*>)}


__shared_count&
operator=(const __shared_count& __r) // nothrow
{
    _sp_counted_base<_lp>* __tmp = __r._m_pi;
    if (__tmp != _m_pi)
    {
        if (__tmp != 0)
            __tmp->_m_add_ref_copy();
	if (_m_pi != 0)
	    _m_pi->_m_release();
	
        _m_pi = __tmp;
    }
    return *this;
}
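
This sharing is easy to observe in practice (an illustrative snippet, not part of the original text):

#include <cassert>
#include <memory>

struct A { };

int main()
{
    std::shared_ptr<A> sp1(new A);   // allocates A and one manager object
    std::shared_ptr<A> sp2 = sp1;    // copy: no new manager object, use_count -> 2
    std::shared_ptr<A> sp3;
    sp3 = sp2;                       // assignment shares the same manager object too

    // all three read the same use_count stored in the shared manager object
    assert(sp1.use_count() == 3 && sp2.use_count() == 3 && sp3.use_count() == 3);
    return 0;
}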

 

We just said that when several shared_ptr objects manage the same object they share one dynamically allocated manager object, so why does the __shared_count assignment operator above have to handle the case __tmp != _m_pi? One such situation is when sp2 has not yet been initialized (its _m_pi is 0 while __r._m_pi is not).

 

More generally, consider this situation: a shared_ptr sp1 initially points to an instance a1 of class A, and another shared_ptr sp2 points to a different instance a2 (a1 != a2); assigning sp2 to sp1 then produces exactly the case above. Suppose that initially sp1 is the only pointer to a1 and sp2 is the only pointer to a2. After the assignment both sp1 and sp2 point to a2, nothing points to a1 any more, and a1 together with its manager object should be destroyed. The code above shows this clearly: because __tmp != _m_pi, __tmp->_m_add_ref_copy() increments the use_count of a2's manager object, and because sp1's _m_pi (pointing to a1's manager object) is non-null, its _m_release() is called:

 

//************_sp_counted_base*****************//
void
_m_add_ref_copy()
{ __gnu_cxx::__atomic_add_dispatch(&_m_use_count, 1); }


//************_sp_counted_base*****************//
void
_m_release() // nothrow
{
    // be race-detector-friendly.  for more info see bits/c++config.
    _glibcxx_synchronization_happens_before(&_m_use_count);
	if (__gnu_cxx::__exchange_and_add_dispatch(&_m_use_count, -1) == 1)
	{
            _glibcxx_synchronization_happens_after(&_m_use_count);
	    _m_dispose();
	    // there must be a memory barrier between dispose() and destroy()
	    // to ensure that the effects of dispose() are observed in the
	    // thread that runs destroy().
	    // see https://gcc.gnu.org/ml/libstdc++/2005-11/msg00136.html
	    if (_mutex_base<_lp>::_s_need_barriers)
	    {
		    __atomic_thread_fence (__atomic_acq_rel);
	    }

            // be race-detector-friendly.  for more info see bits/c++config.
            _glibcxx_synchronization_happens_before(&_m_weak_count);
	    if (__gnu_cxx::__exchange_and_add_dispatch(&_m_weak_count, -1) == 1)
            {
		_glibcxx_synchronization_happens_after(&_m_weak_count);
	        _m_destroy();
             }
	}
}

//************_sp_counted_base*****************//
// called when _m_use_count drops to zero, to release the resources
// managed by *this.
virtual void
_m_dispose() = 0; // nothrow

// called when _m_weak_count drops to zero.
virtual void
_m_destroy() // nothrow
{ delete this; }

//************_sp_counted_base_impl*************//
virtual void
_m_dispose() // nothrow
{ _m_del(_m_ptr); }

 

_m_release() first decrements a1's use_count by one and checks the value it had before the decrement: if it was 1, the count is now 0 and no shared_ptr points to a1 any more, so a1 should be destroyed and _m_dispose() is called to destroy it. It then decrements a1's weak_count in the same way: if the value before the decrement was 1, the count is now 0 and no weak_ptr refers to a1 either, so the manager object itself should go away, and _m_destroy() deletes it.
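
The a1/a2 scenario described above can be traced with a small test (illustrative only; the destructor output marks where _m_dispose() runs):

#include <iostream>
#include <memory>

struct A {
    int id;
    explicit A(int i) : id(i) { }
    ~A() { std::cout << "~A(" << id << ")\n"; }
};

int main()
{
    std::shared_ptr<A> sp1(new A(1));      // sole owner of a1
    std::shared_ptr<A> sp2(new A(2));      // sole owner of a2

    sp1 = sp2;                             // a1's use_count drops to 0: prints "~A(1)",
                                           // and a1's manager object is destroyed as well
    std::cout << sp1.use_count() << "\n";  // prints 2: sp1 and sp2 both own a2
    return 0;                              // prints "~A(2)" when sp2 and sp1 go out of scope
}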

 

From the above we can see that use_count essentially marks the lifetime of the managed object, while weak_count marks the lifetime of the manager object. When a shared_ptr goes out of scope and is destroyed, its __shared_count likewise calls _m_release() to decrement use_count/weak_count and decide whether the corresponding resources need to be released:

 

~__shared_count() // nothrow
 {
	 if (_m_pi != 0)
	  _m_pi->_m_release();
 }
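
One consequence of this split is that a weak_ptr keeps only the manager object alive, never the managed object; a sketch of the behavior:

#include <iostream>
#include <memory>

int main()
{
    std::weak_ptr<int> wp;
    {
        std::shared_ptr<int> sp(new int(42));  // use_count = 1, weak_count = 1
        wp = sp;                               // use_count = 1, weak_count = 2
    }   // sp destroyed: use_count -> 0, the int is freed (_m_dispose),
        // but the manager object survives because weak_count is still 1

    std::cout << wp.expired() << "\n";            // prints 1: the object is gone
    std::cout << (wp.lock() == nullptr) << "\n";  // prints 1: lock() yields an empty shared_ptr
    return 0;
}   // wp destroyed: weak_count -> 0, the manager object is freed (_m_destroy)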

 

For weak_ptr, the relevant __weak_count operations (weak reference counting, assignment and destruction) are shown below:

//************_sp_counted_base*****************//
 void
 _m_weak_add_ref() // nothrow
{ __gnu_cxx::__atomic_add_dispatch(&_m_weak_count, 1); }

//************_sp_counted_base*****************//
void
_m_weak_release() // nothrow
{
    // be race-detector-friendly. for more info see bits/c++config.
    _glibcxx_synchronization_happens_before(&_m_weak_count);
    if (__gnu_cxx::__exchange_and_add_dispatch(&_m_weak_count, -1) == 1)
    {
        _glibcxx_synchronization_happens_after(&_m_weak_count);
	if (_mutex_base<_lp>::_s_need_barriers)
	{
	    // see _m_release(),
	    // destroy() must observe results of dispose()
            __atomic_thread_fence (__atomic_acq_rel);
	}
	_m_destroy();
    }
}
 
__weak_count<_lp>&
operator=(const __shared_count<_lp>& __r) // nothrow
{
    _sp_counted_base<_lp>* __tmp = __r._m_pi;
    if (__tmp != 0)
        __tmp->_m_weak_add_ref();
  
    if (_m_pi != 0)
        _m_pi->_m_weak_release();
  
    _m_pi = __tmp;  
	
    return *this;
}
      
__weak_count<_lp>&
operator=(const __weak_count<_lp>& __r) // nothrow
{
    _sp_counted_base<_lp>* __tmp = __r._m_pi;
    if (__tmp != 0)
        __tmp->_m_weak_add_ref();
    if (_m_pi != 0)
        _m_pi->_m_weak_release();
    _m_pi = __tmp;
	
    return *this;
}

~__weak_count() // nothrow
{
    if (_m_pi != 0)
        _m_pi->_m_weak_release();
}

 

As can be seen above, __weak_count's copy/assignment operators and its destructor affect only weak_count, and when weak_count drops to 0 the manager object is released.
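
This, finally, is what makes weak_ptr suitable for breaking the shared_ptr cycles mentioned at the beginning; a closing sketch:

#include <iostream>
#include <memory>

struct B;

struct A {
    std::shared_ptr<B> b;
    ~A() { std::cout << "~A\n"; }
};

struct B {
    // A shared_ptr<A> here would form a cycle and neither object would ever be freed;
    // a weak_ptr does not touch use_count, so the cycle is broken.
    std::weak_ptr<A> a;
    ~B() { std::cout << "~B\n"; }
};

int main()
{
    std::shared_ptr<A> pa(new A);
    std::shared_ptr<B> pb(new B);
    pa->b = pb;
    pb->a = pa;
    return 0;   // prints "~A" then "~B": both objects are released
}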
