
impdp: importing tables without creating segments

November 30, 2018

1. Document Overview

   1.0 Requirement: export the picc schema from the production database and import it into a Docker test database in the test environment; only the table structures are needed.

   2.0 Approach: per the requirement from the developers, the source database is 11.2.0.4 on AIX and the target is 11.2.0.3.

                  Tools: expdp for the export, impdp for the import, scp for the transfer.

                            expdp: export metadata only, in parallel, excluding jobs, stored procedures and similar objects.

                            scp: obtain the server port, permissions, account and password in advance.

                            impdp: prepare storage space and user privileges before the import.

  

 

2. Operation Log

-- First import attempt: failed

-- Cause of failure: the CREATE TABLE DDL issued by the import failed, for the reasons below

1. The import target is an Oracle XE learning environment, which limits the database to 11 GB of user data; with enough space the error would not occur.

2. Although the dump file is under 200 MB, the CREATE TABLE statements alone consumed 11 GB before any data was inserted. Each table carries a storage(initial 65536 next 1048576) clause, so space is allocated for every segment at creation time; creating the empty tables was enough to hit the 11 GB XE threshold and raise the error.

3. Testing the 11g deferred segment creation feature shows that it has no effect on objects created by a Data Pump import; the parameter only affects manually created objects (see the check sketched below).
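
A quick way to verify point 3 on the target is the segment_created flag in dba_tables; this check is my own addition and assumes the imported schema is named PICCPROD:

-- tables brought in by Data Pump show segment_created = 'YES'
-- even though deferred_segment_creation is TRUE
select table_name, segment_created
  from dba_tables
 where owner = 'PICCPROD'
   and rownum <= 5;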

 

1) Reference material
Views listing the object types that Data Pump INCLUDE and EXCLUDE can act on:
Export/Import DataPump Parameters INCLUDE and EXCLUDE - How to Load and Unload Specific Objects (Doc ID 341733.1)
database_export_objects / schema_export_objects / table_export_objects
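
These views can also be queried directly to see which object paths are legal in INCLUDE/EXCLUDE; a small sketch of my own:

select object_path, comments
  from schema_export_objects
 where object_path like '%GRANT%';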

2) Export
sql> create directory dump as '/home/oracle/tools';
expdp \'/ as sysdba\' directory=dump dumpfile=system%u.dmp logfile=system.log schemas=system exclude=sequence,trigger,ref_constraint,constraint,procdepobj content=metadata_only cluster=n parallel=2

-- Metadata-only export with parallelism requested, yet only one dump file is produced: metadata can only be exported serially
-- Export log below. If desired, any of the object types listed in it can also be excluded; keeping just tables and indexes satisfies the requirement under the principle of exporting as little as possible
starting "sys"."sys_export_schema_02": "/******** as sysdba" directory=dump dumpfile=system%u.dmp logfile=system.log schemas=system

exclude=sequence,trigger,ref_constraint,constraint,procdepobj content=metadata_only cluster=n parallel=2 
processing object type schema_export/user
processing object type schema_export/system_grant
processing object type schema_export/role_grant
processing object type schema_export/default_role
processing object type schema_export/pre_schema/procact_schema
processing object type schema_export/synonym/synonym
processing object type schema_export/type/type_spec
processing object type schema_export/table/table
processing object type schema_export/table/pre_table_action
processing object type schema_export/table/grant/owner_grant/object_grant
processing object type schema_export/table/comment
processing object type schema_export/package/package_spec
processing object type schema_export/function/function
processing object type schema_export/procedure/procedure
processing object type schema_export/package/compile_package/package_spec/alter_package_spec
processing object type schema_export/function/alter_function
processing object type schema_export/procedure/alter_procedure
processing object type schema_export/table/index/index
processing object type schema_export/table/index/statistics/index_statistics
processing object type schema_export/view/view
processing object type schema_export/view/grant/owner_grant/object_grant
processing object type schema_export/view/comment
processing object type schema_export/package/package_body
processing object type schema_export/table/statistics/table_statistics
processing object type schema_export/table/post_table_action
processing object type schema_export/post_schema/procact_schema
master table "sys"."sys_export_schema_02" successfully loaded/unloaded
******************************************************************************
dump file set for sys.sys_export_schema_02 is:
/home/oracle/tools/system01.dmp
job "sys"."sys_export_schema_02" successfully completed at thu nov 29 14:48:14 2018 elapsed 0 00:00

3) scp transfer (omitted)

4) Import

-- docker command used to reach SQL*Plus
docker run -d --name "oradb" -v /home/dmuser/my_oracle_data:/u01/app/oracle ip:10082/sath89/oracle-xe-11g
-- In a normal environment you simply log on to the database server and run impdp there; this case is slightly unusual because the database runs inside Docker
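
One way to get a shell inside the container so impdp can be run there (a sketch, assuming the container name oradb from the run command above; the transcripts below run as the oracle user):

docker exec -it oradb bash
su - oracle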


-- Create the tablespaces
-- Tablespaces used by the source schema
sql> select tablespace_name from dba_segments where owner='system' group by tablespace_name;

tablespace_name
------------------------------
sysaux
system

-- Create the tablespace on the target

create tablespace system datafile '/u01/app/oracle/oradata/xe/system.dbf' size 10m autoextend on next 30m maxsize 10g extent management local uniform size 128k;
sqlplus / as sysdba
create directory abc as 'xxx';


-- Import
 impdp \'/ as sysdba\' dumpfile=piccprod%u.dmp directory=dump logfile=picc.log remap_tablespace=lifelob_data:lifedata_t_l,lifeindex_t_l:lifedata_t_l
 

-- The import fails
 ora-01658: unable to create initial extent for segment in tablespace system
-- Extend the system tablespace
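
The resize command itself was not recorded; it was something along these lines (datafile path and sizes are assumptions):

alter database datafile '/u01/app/oracle/oradata/xe/system.dbf' resize 2g;
-- or let the file grow on demand
alter database datafile '/u01/app/oracle/oradata/xe/system.dbf' autoextend on next 100m maxsize 10g;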

-- Import again; it fails with a different error
 create table "piccprod"."t_policy_check_item" ("item_id" number(10,0) not null enable, "customer_id" number(10,0)
 not null enable, "check_num" number(4,0) not null enable, "normal" char(1 byte) default 'y' not null enable,
 "notes" varchar2(1000 byte), "charge_fee" number(10,2) default 0 not null enable) pctfree 10 pctused 40 in
ora-39171: job is experiencing a resumable wait.
ora-12953: the request exceeds the maximum allowed database size of 11 gb
 
-- An MOS search explains: this is Oracle XE (Express Edition, the cut-down edition), which allows at most 11 GB of user data
ora-12953: the request exceeds the maximum allowed database size of 11 gb (doc id 2414879.1)    
[release 11.2 to 12.2]
this is an oracle database xe environment, and oracle database xe has a limitation of 11 gb of user data
Question: how did the test database get to 11 GB?

 sql> select round( sum(bytes)/1024/1024) m from dba_segments where owner not in('piccprod');
         m
----------
      1708
-- After the schema import the total exceeds 11 GB, hitting the XE ceiling and raising the error
sql> select sum(bytes)/1024/1024 from dba_segments;
sum(bytes)/1024/1024
--------------------
          11706.8125
-- Which object types are using the space
sql> select sum(bytes/1024/1024) ,segment_type from dba_segments where owner='piccprod' group by owner,segment_type;
sum(bytes/1024/1024) segment_type
-------------------- ------------------
                9999 table

-- The dump file is only 182 MB, yet after the import the table segments alone occupy close to 10 GB
oracle@c46bf408bfad:~/dmpfile$ ls
piccprod01.dmp  picc.log
oracle@c46bf408bfad:~/dmpfile$ du -sm piccprod01.dmp 
182     piccprod01.dmp

Check the table data:
select segment_name,segment_type,bytes/1024/1024 from dba_segments where owner='piccprod' and segment_type='table' and rownum=1;
segment_name                   segment_type       bytes/1024/1024
------------------------------ ------------------ ---------------
t_diagnosis_type               table                           .5

select count(*) from piccprod.t_diagnosis_type;
  count(*)
----------
         0
Check the table's extent distribution
sql> select tablespace_name,segment_name,extent_id,block_id,bytes/1024,blocks from dba_extents where owner='piccprod' and segment_name='t_diagnosis_type';
tablespace_name segment_name          extent_id   block_id bytes/1024     blocks
--------------- -------------------- ---------- ---------- ---------- ----------
lifedata_t_l    t_diagnosis_type              0        128        128         16
lifedata_t_l    t_diagnosis_type              1        144        128         16
lifedata_t_l    t_diagnosis_type              2        160        128         16
lifedata_t_l    t_diagnosis_type              3        176        128         16



Try moving the table (datafile resize is not analyzed further in this pass; to shrink a datafile you would first move the segments occupying the highest block_id in the file and then resize it)
alter table piccprod.t_diagnosis_type move tablespace lifedata_t_l;
                    *
error at line 1:
ora-12953: the request exceeds the maximum allowed database size of 11 gb
* A MOVE needs free space equal to the size of the object being moved; there is not enough free space here, so the move fails *
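
A sketch of my own to check how much free space the target tablespace actually has before a move:

select tablespace_name, round(sum(bytes)/1024/1024) free_mb
  from dba_free_space
 where tablespace_name = 'LIFEDATA_T_L'
 group by tablespace_name;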
* Free up space: find the largest objects
select segment_name,sum(blocks) from dba_extents where owner='piccprod' group by segment_name order by 2;
segment_name        sum(blocks)
-------------------------------------------
t_policy_auto        221312
t_policy_product     309120
drop table piccprod.t_policy_auto purge;
drop table piccprod.t_policy_product purge;

alter table piccprod.t_diagnosis_type move tablespace lifedata_t_l;

select tablespace_name,segment_name,extent_id,block_id,bytes/1024,blocks from dba_extents where owner='piccprod' and segment_name='t_diagnosis_type';
tablespace_name      segment_name          extent_id   block_id bytes/1024     blocks
-------------------- -------------------- ---------- ---------- ---------- ----------
lifedata_t_l         t_diagnosis_type              0      34720        128         16
lifedata_t_l         t_diagnosis_type              1      34736        128         16
lifedata_t_l         t_diagnosis_type              2      34752        128         16
lifedata_t_l         t_diagnosis_type              3      34768        128         16

* Try resizing the datafile *
select file_id,bytes/1024/1024 from dba_data_files where tablespace_name='lifedata_t_l';
   file_id bytes/1024/1024
---------- ---------------
         5           10000
alter database datafile 5 resize 5000m;
*
error at line 1:
ora-03297: file contains used data beyond requested resize value
-- Although the tables contain no data, their segments have storage allocated, and that storage still counts as used
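
How far the datafile can actually be shrunk is bounded by its highest allocated block; a sketch of my own, assuming the 8 KB block size shown later and file_id 5 from the query above:

select ceil(max(block_id + blocks) * 8192 / 1024 / 1024) min_resize_mb
  from dba_extents
 where file_id = 5;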

*truncate table
sql> truncate table piccprod.t_diagnosis_type;
table truncated.

sql> select tablespace_name,segment_name,extent_id,block_id,bytes/1024,blocks from dba_extents where owner='piccprod' and segment_name='t_diagnosis_type';
tablespace_name      segment_name          extent_id   block_id bytes/1024     blocks
-------------------- -------------------- ---------- ---------- ---------- ----------
lifedata_t_l         t_diagnosis_type              0      34720        128         16
lifedata_t_l         t_diagnosis_type              1      34736        128         16
lifedata_t_l         t_diagnosis_type              2      34752        128         16
lifedata_t_l         t_diagnosis_type              3      34768        128         16


* Database parameter: deferred segment creation
sql> show parameter defer
name                                 type        value
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     true
-- The parameter was never set to FALSE, yet the earlier import still allocated segments when creating the tables; deferred segment creation therefore has no effect on objects created by Data Pump import
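
For contrast, a table created by hand does honour the parameter; a minimal sketch (table name is hypothetical, and it assumes a database with room to spare):

create table piccprod.demo_deferred (id number);

select segment_created
  from dba_tables
 where owner = 'PICCPROD'
   and table_name = 'DEMO_DEFERRED';
-- expected: NO -- the segment appears only when the first row is inserted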


* Extract the CREATE TABLE statements
-- Import with SQLFILE to extract the DDL
 impdp \'/ as sysdba\' dumpfile=piccprod%u.dmp directory=dump logfile=picc%u.log sqlfile=sql01.sql
 ora-39002: invalid operation
ora-31694: master table "sys"."sys_sql_file_full_01" failed to load/unload
ora-02354: error in exporting/importing data
ora-39776: fatal direct path api error loading table "sys"."sys_sql_file_full_01"
ora-12953: the request exceeds the maximum allowed database size of 11 gb
--
 create table "piccprod"."t_policy_check_item" ("item_id" number(10,0) not null enable,
"customer_id" number(10,0) not null enable, "check_num" number(4,0) not null enable,
 "normal" char(1 byte) default 'y' not null enable, "notes" varchar2(1000 byte), 
"charge_fee" number(10,2) default 0 not null enable) pctfree 10 pctused 40 in
ora-39171: job is experiencing a resumable wait.

ora-12953: the request exceeds the maximum allowed database size of 11 gb

Why does even extracting DDL with impdp raise the error? Because a Data Pump job creates its own objects (the master table) inside the database, and that needs storage; the database is already full, so even without loading any data, creating that temporary object fails.
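
Aborted or orphaned Data Pump jobs also leave their master tables behind and keep holding space; a sketch of my own to list them:

select owner_name, job_name, state, attached_sessions
  from dba_datapump_jobs;
-- NOT RUNNING jobs with no attached sessions can be cleaned up by dropping the listed master table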

 

 

 

 

-- Second attempt: successful

Summary: the first import failed mainly because Data Pump allocated each segment's initial space at import time. The fix is to stop the import from allocating the first extent, which avoids the problem:

impdp \'/ as sysdba\' dumpfile=piccprod%u.dmp directory=dump logfile=picc.log remap_tablespace=lifelob_data:lifedata_t_l,lifeindex_t_l:lifedata_t_l transform=segment_attributes:n

 

 

 

3. Further Analysis
What exactly does transform=segment_attributes:n do, and how does it behave?
Test plan:
1. Create a test table with a primary key and an index
2. Export the objects, metadata only
3. Import using the transform parameter
4. Verify that the table and indexes exist
5. Check whether segments were allocated for the table and indexes
6. Compare the impdp SQLFILE output with the DDL returned by dbms_metadata.get_ddl for the table and indexes

 



1. Create a test table with a primary key and an index
sql> create table yang.ceshi as select * from scott.emp;
sql> alter table yang.ceshi add constraint pk_id primary key(empno);
sql> create index yang.id2 on  yang.ceshi(deptno);
          
2. Export the objects, metadata only
expdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang.log schemas=yang include=table:"in('ceshi')" content=metadata_only

3. Import using the transform parameter
impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang1.log transform = segment_attributes:n remap_schema=yang:scott

4. Verify that the table and indexes exist
select object_name,object_type,status from user_objects where created>sysdate-1

object_name          object_type         status
-------------------- ------------------- -------
id2                  index               valid
pk_id                index               valid
ceshi                table               valid

5. Check whether segments were allocated for the table and indexes
sql> select segment_name,segment_type,blocks,extents,initial_extent,next_extent from user_segments where segment_name in(select object_name from user_objects where created>sysdate-1);

no rows selected
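
The same result can be seen through the segment_created flag (a sketch of my own):

select table_name, segment_created
  from dba_tables
 where owner = 'SCOTT'
   and table_name = 'CESHI';
-- expected: segment_created = NO until data is inserted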

6. Compare the impdp SQLFILE output with the table and index DDL from dbms_metadata.get_ddl
[oracle@t2 tools]$ impdp \'/ as sysdba\' directory=dump dumpfile=yang%u.dmp logfile=yang1.log transform = segment_attributes:n remap_schema=yang:scott sqlfile=test_table.sql

-- Notes on the generated SQL file:
Part 1: an ALTER SESSION statement that enables a trace event
Part 2: the CREATE TABLE statement, now without any storage clause
Part 3: the CREATE INDEX statements; during a real data import, PARALLEL helps build indexes quickly, and after each index is created Data Pump resets it to NOPARALLEL
Part 4: the index statistics are imported and the temporary objects Data Pump created (sys.impdp_stats) are cleaned up afterwards
[oracle@t2 tools]$ more test_table.sql 
-- connect sys 
alter session set events '22830 trace name context forever, level 192 ';
-- new object type path: schema_export/table/table
create table "scott"."ceshi" 
( "empno" number(4,0), 
"ename" varchar2(10 byte), 
"job" varchar2(9 byte), 
"mgr" number(4,0), 
"hiredate" date, 
"sal" number(7,2), 
"comm" number(7,2), 
"deptno" number(2,0)
) ;
-- new object type path: schema_export/table/index/index
-- connect scott
create unique index "scott"."pk_id" on "scott"."ceshi" ("empno") ;

alter index "scott"."pk_id" noparallel;
create index "scott"."id2" on "scott"."ceshi" ("deptno") 
;

alter index "scott"."id2" noparallel;
-- new object type path: schema_export/table/constraint/constraint
-- connect sys
alter table "scott"."ceshi" add constraint "pk_id" primary key ("empno")
using index "scott"."pk_id" enable;
-- new object type path: schema_export/table/index/statistics/index_statistics
declare i_n varchar2(60); 
i_o varchar2(60); 
nv varchar2(1); 
c dbms_metadata.t_var_coll; 
df varchar2(21) := 'yyyy-mm-dd:hh24:mi:ss'; 
stmt varchar2(300) := ' insert into "sys"."impdp_stats" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) values (''i'',6,:1,:
2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,null,:14,:15,null,:16,:17)';
begin
delete from "sys"."impdp_stats"; 
i_n := 'pk_id'; 
i_o := 'scott';


execute immediate stmt using 2,i_n,nv,nv,i_o,14,1,14,1,1,1,0,14,nv,nv,to_date('2018-11-29 15:17:22',df),nv;

dbms_stats.import_index_stats('"' || i_o || '"','"' || i_n || '"',null,'"impdp_stats"',null,'"sys"'); 
delete from "sys"."impdp_stats"; 
end; 
/
declare i_n varchar2(60); 
i_o varchar2(60); 
nv varchar2(1); 
c dbms_metadata.t_var_coll; 
df varchar2(21) := 'yyyy-mm-dd:hh24:mi:ss'; 
stmt varchar2(300) := ' insert into "sys"."impdp_stats" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) values (''i'',6,:1,:
2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,null,:14,:15,null,:16,:17)';
begin
delete from "sys"."impdp_stats"; 
i_n := 'id2'; 
i_o := 'scott'; 
execute immediate stmt using 2,i_n,nv,nv,i_o,14,1,3,1,1,1,0,14,nv,nv,to_date('2018-11-29 15:18:08',df),nv;

dbms_stats.import_index_stats('"' || i_o || '"','"' || i_n || '"',null,'"impdp_stats"',null,'"sys"'); 
delete from "sys"."impdp_stats"; 
end; 
/


--- Query the table's DDL
select dbms_metadata.get_ddl('table','ceshi','scott') ddl_text from dual

ddl_text
--------------------------------------------------------------------------------

create table "scott"."ceshi"
( "empno" number(4,0),
"ename" varchar2(10),
"job" varchar2(9),
"mgr" number(4,0),
"hiredate" date,
"sal" number(7,2),
"comm" number(7,2),
"deptno" number(2,0),
constraint "pk_id" primary key ("empno")
using index pctfree 10 initrans 2 maxtrans 255 compute statistics
tablespace "users" enable
) segment creation deferred
pctfree 10 pctused 40 initrans 1 maxtrans 255
nocompress logging
tablespace "users"

-- With the storage attributes gone from the table

Insert data to test

sql> insert into scott.ceshi select * from scott.emp;

-- Check the extent allocation for the table and indexes. The table has no storage attributes of its own, so how were the extents sized?

sql> select segment_name,segment_type,blocks,extents,initial_extent/1024 initial_extent_k,next_extent/1024 next_extent_k from 
user_segments where segment_name in(select object_name from user_objects where created>sysdate-1);
segment_name    segment_type       blocks     extents  initial_extent_k  next_extent_k
--------------- ------------------ ---------- -------- ----------------- --------------
ceshi           table                       8        1                64           1024
pk_id           index                       8        1                64           1024
id2             index                       8        1                64           1024

-- Check the tablespace attributes

sql> select tablespace_name from user_segments where segment_name in(select object_name from user_objects where created>sysdate-1) group by tablespace_name;

tablespace_name
------------------------------
users

sql> select tablespace_name,block_size,initial_extent/1024 initial_extent_k,next_extent/1024 next_extent_k,segment_space_management,allocation_type,extent_management from dba_tablespaces where tablespace_name='users';

tablespace block_size initial_extent_k next_extent_k segmen allocatio extent_man
---------- ---------- ---------------- ------------- ------ --------- ----------
users            8192               64               auto   system    local

-- Locally managed tablespace, extents allocated automatically by the system (autoallocate): extent sizes of 64k, 1m, 8m, 64m

-- Conclusion: when a table is created without storage attributes, its extents are sized according to the tablespace's default storage policy

 

 


  

4. Summary
1. For a given symptom there is more than one way to a solution; any approach that resolves it within acceptable limits is fine.
2. Match the symptom to a suitable technique, use a solution you can operate safely and without risk, and only use it after testing.



     
