Category: Data Warehouse

Synchronizing Data with Snapshots (Materialized Views)

To keep the application tables consistent across the whole system, a customer may ask for them to be maintained in one place, i.e. on a single server. For example, the application tables are maintained at site 1 (database rptdw, user kr), while the other sites (database etldb1, user etlmon, and database etldb2, user etlmon) simply use them. In other words, site 1 can insert into, delete from, and update the application tables, while the other sites can only query them.

A snapshot is exactly what this requirement calls for. Note that this is not the snapshot in the ORA-01555 "snapshot too old" error, but the SNAPSHOT object, which since 9i has been called a materialized view.
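As a rough sketch of this setup (the table name app_cfg, the database link name to_rptdw, the password, and the hourly refresh interval are assumptions for illustration, not values from the original environment):

-- On the master site (rptdw, user kr): a materialized view log so remote sites can fast refresh
CREATE MATERIALIZED VIEW LOG ON kr.app_cfg WITH PRIMARY KEY;

-- On a read-only site (etldb1 or etldb2, user etlmon): pull the table over a database link
CREATE DATABASE LINK to_rptdw CONNECT TO kr IDENTIFIED BY kr_password USING 'rptdw';

CREATE MATERIALIZED VIEW app_cfg
REFRESH FAST
START WITH SYSDATE NEXT SYSDATE + 1/24
AS SELECT * FROM kr.app_cfg@to_rptdw;

Site 1 keeps full DML on the base table, while etlmon only queries its local materialized view copy.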

A Summary of the Data Warehouse Topics in the OCM Exam


The exam outline published on the Oracle website is as follows:
Data Warehouse Management

[  ]         Create external tables (Data Loader and Data Pump drivers)
[  ]         Implement Data Pump export and import jobs for data transfer
[  ]         Implement Data Pump to and from remote databases
[  ]         Implement Replication/Streams
[  ]         Configure and manage master replication
[  ]         Configure and manage distributed materialized views
[  ]         Configure and manage Streams for replication

The topics actually involved are only:

external tables, Data Pump, Streams replication, fast-refresh materialized views, and updatable materialized views. None of these is particularly hard, but together they cover a lot of ground; the error-prone ones are materialized views and Streams, so I will spend some time later running experiments and pulling the key points together again. As a quick refresher on the first item, an external-table sketch follows.
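A minimal external-table sketch using the ORACLE_LOADER driver (the directory path, file name emp.csv, and column list are assumptions for illustration):

CREATE OR REPLACE DIRECTORY ext_dir AS '/u01/app/oracle/ext_data';

CREATE TABLE emp_ext (
empno NUMBER,
ename VARCHAR2(30),
sal NUMBER
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_dir
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
)
LOCATION ('emp.csv')
)
REJECT LIMIT UNLIMITED;

The same shape of definition with TYPE ORACLE_DATAPUMP (and an AS SELECT clause) covers the Data Pump driver mentioned in the outline.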

Oracle 10g OCM Exam Flowchart


 

Splitting and Parallelizing Huge Data Warehouse Tables

Table splitting
1. An approach to computing over huge data warehouse tables: table splitting.
Motivation: a single table holds too much data and the computation takes too long, so it has to be optimized.

2. How table splitting is applied in practice (a sketch follows below):
Hash a chosen column to get an integer. Oracle provides the dbms_utility package for this:
function get_hash_value(name varchar2, base number, hash_size number)
return number
Based on that hash value, insert the rows into different tables, compute each one separately, and then aggregate the results.
Alternatively, define your own splitting function, for example with MOD, or from the trailing digits of a numeric column, and so on; anything that spreads the data out evenly will do.
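A minimal sketch of the split, under assumed names (source table big_tab, numeric key cust_id, four target tables big_tab_0 through big_tab_3):

-- Bucket 0 of 4; repeat with MOD(cust_id, 4) = 1, 2, 3 for the other target tables,
-- run the four loads (and their computations) in parallel, then aggregate the results.
INSERT /*+ APPEND */ INTO big_tab_0
SELECT * FROM big_tab WHERE MOD(cust_id, 4) = 0;

-- For a character key, dbms_utility.get_hash_value gives the bucket number instead:
-- get_hash_value(name, base, hash_size) returns a value in [base, base + hash_size - 1].
DECLARE
v_bucket PLS_INTEGER;
BEGIN
v_bucket := dbms_utility.get_hash_value('CUST-10001', 0, 4); -- a value from 0 to 3
dbms_output.put_line('bucket = ' || v_bucket);
END;
/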


SQL*Loader Usage Examples

1. Characteristics of SQL*Loader
Oracle ships with many tools for data migration, backup, and recovery, and each has its own characteristics.
For example, exp and imp can export and import database data, which makes them good tools for database backup and recovery; they are fast, simple, and convenient to use. They also have drawbacks: exporting and importing between different database versions tends to run into one problem or another, perhaps a compatibility issue within Oracle's own product line.
SQL*Loader does not have this problem. It loads data held in text files into an Oracle database, which makes it a very convenient and general-purpose tool for moving data between different databases. Its drawbacks are that it is relatively slow and that types such as BLOB are awkward to handle.
2. SQL*Loader help
C:\>sqlldr
SQL*Loader: Release 9.2.0.1.0 - Production on Sat Oct 9 14:48:12 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Usage: SQLLDR keyword=value [,keyword=value,...]
Valid keywords:
userid -- ORACLE username/password
control -- Control file name
log -- Log file name
bad -- Bad file name
data -- Data file name
discard -- Discard file name
discardmax -- Number of discards to allow (Default all)
skip -- Number of logical records to skip (Default 0)
load -- Number of logical records to load (Default all)
errors -- Number of errors to allow (Default 50)
rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, all rows for direct path)
bindsize -- Size of conventional path bind array in bytes (Default 256000)
silent -- Suppress messages during run (header, feedback, errors, discards, partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- File to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
readsize -- Size of Read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE (Default NOT_USED)
columnarrayrows -- Number of rows for direct path column array (Default 5000)
streamsize -- Size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
date_cache -- size (in entries) of date conversion cache (Default 1000)
PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. Parameters may be specified by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
C:\>
3. SQL*Loader usage examples
a) Loading Excel data into Oracle with SQL*Loader
1. Create the files SQL*Loader needs for the input data, all saved under C:\ and edited with Notepad.
The control file input.ctl contains:
load data                    -- 1. control file keyword
infile 'test.txt'            -- 2. the input data file is test.txt
append into table test       -- 3. append records to table test
fields terminated by X'09'   -- 4. fields terminated by X'09', i.e. a tab character
(id,username,password,sj)    -- column order mapping
The load mode on the INTO TABLE line can be one of:
a. insert, the default, which requires the table to be empty when the load starts
b. append, which adds new records to the table
c. replace, which deletes the existing records and replaces them with the newly loaded ones
d. truncate, which is like replace but empties the table with TRUNCATE
Run the SQL*Loader command in a DOS window to perform the load:
C:\>sqlldr userid=system/manager control=input.ctl
The default log file name is input.log
The default bad file name is input.bad
2. An alternative approach: save the Excel file as CSV (comma separated, *.csv) and change the control file to use a comma delimiter:
LOAD DATA
INFILE 'd:\car.csv'
APPEND INTO TABLE t_car_temp
FIELDS TERMINATED BY ","
(phoneno,vip_car)
b) Embedding the data directly in the control file
1. Contents of the control file test.ctl
-- The format for executing this file with SQL*Loader is:
-- SQLLDR control=<control file name>
-- Be sure to substitute your version of SQL*Loader and the name of this file.
LOAD DATA
INFILE *
BADFILE 'C:\Documents and Settings\Jackey\桌面\WMCOUNTRY.BAD'
DISCARDFILE 'C:\Documents and Settings\Jackey\桌面\WMCOUNTRY.DSC'
INSERT INTO TABLE EMCCOUNTRY
Fields terminated by ";" Optionally enclosed by '"'
(
COUNTRYID NULLIF (COUNTRYID="NULL"),
COUNTRYCODE,
COUNTRYNAME,
CONTINENTID NULLIF (CONTINENTID="NULL"),
MAPID NULLIF (MAPID="NULL"),
CREATETIME DATE "MM/DD/YYYY HH24:MI:SS" NULLIF (CREATETIME="NULL"),
LASTMODIFIEDTIME DATE "MM/DD/YYYY HH24:MI:SS" NULLIF (LASTMODIFIEDTIME="NULL")
)
BEGINDATA
1;"JP";"Japan";1;9;"09/16/2004 16:31:32";NULL
2;"CN";"China";1;10;"09/16/2004 16:31:32";NULL
3;"IN";"India";1;11;"09/16/2004 16:31:32";NULL
4;"AU";"Australia";6;12;"09/16/2004 16:31:32";NULL
5;"CA";"Canada";4;13;"09/16/2004 16:31:32";NULL
6;"US";"United States";4;14;"09/16/2004 16:31:32";NULL
7;"MX";"Mexico";4;15;"09/16/2004 16:31:32";NULL
8;"GB";"United Kingdom";3;16;"09/16/2004 16:31:32";NULL
9;"DE";"Germany";3;17;"09/16/2004 16:31:32";NULL
10;"FR";"France";3;18;"09/16/2004 16:31:32";NULL
11;"IT";"Italy";3;19;"09/16/2004 16:31:32";NULL
12;"ES";"Spain";3;20;"09/16/2004 16:31:32";NULL
13;"FI";"Finland";3;21;"09/16/2004 16:31:32";NULL
14;"SE";"Sweden";3;22;"09/16/2004 16:31:32";NULL
15;"IE";"Ireland";3;23;"09/16/2004 16:31:32";NULL
16;"NL";"Netherlands";3;24;"09/16/2004 16:31:32";NULL
17;"DK";"Denmark";3;25;"09/16/2004 16:31:32";NULL
18;"BR";"Brazil";5;85;"09/30/2004 11:25:43";NULL
19;"KR";"Korea, Republic of";1;88;"09/30/2004 11:25:43";NULL
20;"NZ";"New Zealand";6;89;"09/30/2004 11:25:43";NULL
21;"BE";"Belgium";3;79;"09/30/2004 11:25:43";NULL
22;"AT";"Austria";3;78;"09/30/2004 11:25:43";NULL
23;"NO";"Norway";3;82;"09/30/2004 11:25:43";NULL
24;"LU";"Luxembourg";3;81;"09/30/2004 11:25:43";NULL
25;"PT";"Portugal";3;83;"09/30/2004 11:25:43";NULL
26;"GR";"Greece";3;80;"09/30/2004 11:25:43";NULL
27;"IL";"Israel";1;86;"09/30/2004 11:25:43";NULL
28;"CH";"Switzerland";3;84;"09/30/2004 11:25:43";NULL
29;"A1";"Anonymous Proxy";0;0;"09/30/2004 11:25:43";NULL
30;"A2";"Satellite Provider";0;0;"09/30/2004 11:25:43";NULL
31;"AD";"Andorra";3;0;"09/30/2004 11:25:43";NULL
32;"AE";"United Arab Emirates";1;0;"09/30/2004 11:25:43";NULL
33;"AF";"Afghanistan";1;0;"09/30/2004 11:25:43";NULL
34;"AG";"Antigua and Barbuda";7;0;"09/30/2004 11:25:43";NULL
35;"AI";"Anguilla";7;0;"09/30/2004 11:25:43";NULL
36;"AL";"Albania";3;0;"09/30/2004 11:25:43";NULL
37;"AM";"Armenia";3;0;"09/30/2004 11:25:43";NULL
38;"AN";"Netherlands Antilles";3;0;"09/30/2004 11:25:43";NULL
39;"AO";"Angola";2;0;"09/30/2004 11:25:43";NULL
40;"AP";"Asia/Pacific Region";2;0;"09/30/2004 11:25:43";NULL
41;"AQ";"Antarctica";8;0;"09/30/2004 11:25:43";NULL
42;"AR";"Argentina";5;0;"09/30/2004 11:25:43";NULL
43;"AS";"American Samoa";6;0;"09/30/2004 11:25:43";NULL
44;"AW";"Aruba";5;0;"09/30/2004 11:25:43";NULL
45;"AZ";"Azerbaijan";1;0;"09/30/2004 11:25:43";NULL
46;"BA";"Bosnia and Herzegovina";3;0;"09/30/2004 11:25:43";NULL
47;"BB";"Barbados";5;0;"09/30/2004 11:25:43";NULL
48;"BD";"Bangladesh";1;0;"09/30/2004 11:25:43";NULL
49;"BF";"Burkina Faso";2;0;"09/30/2004 11:25:43";NULL
50;"BG";"Bulgaria";3;0;"09/30/2004 11:25:43";NULL
51;"BH";"Bahrain";1;0;"09/30/2004 11:25:43";NULL
52;"BI";"Burundi";2;0;"09/30/2004 11:25:43";NULL
53;"BJ";"Benin";2;0;"09/30/2004 11:25:43";NULL
54;"BM";"Bermuda";4;0;"09/30/2004 11:25:43";NULL
55;"BN";"Brunei Darussalam";1;0;"09/30/2004 11:25:43";NULL
56;"BO";"Bolivia";5;0;"09/30/2004 11:25:43";NULL
57;"BS";"Bahamas";7;0;"09/30/2004 11:25:43";NULL
58;"BT";"Bhutan";1;0;"09/30/2004 11:25:43";NULL
59;"BV";"Bouvet Island";5;0;"09/30/2004 11:25:43";NULL
60;"BW";"Botswana";2;0;"09/30/2004 11:25:43";NULL
61;"BY";"Belarus";3;0;"09/30/2004 11:25:43";NULL

 

2. Run the load command
C:\>sqlldr userid=system/manager control=test.ctl
SQL*Loader is Oracle's tool for loading external data into the database. It is similar to DB2's Load utility, but offers more options: it supports different load modes, selective loading, and multi-table loads.
How to use the SQL*Loader tool
We can load data with Oracle's sqlldr utility. For example:
sqlldr scott/tiger control=loader.ctl
The control file (loader.ctl) loads an external, delimited data file. loader.ctl looks like this:
load data
infile 'c:\data\mydata.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )
mydata.csv contains:
10001,"Scott Tiger", 1000, 40
10002,"Frank Naude", 500, 20
Below is an example control file that loads records by fixed position. INFILE * ("*") means the data is not in a separate file but is embedded in the control file itself, marked by the BEGINDATA section that follows.
load data
infile *
replace
into table departments
( dept position (02:05) char(4),
deptname position (08:27) char(20)
)
begindata
COSC COMPUTER SCIENCE
ENGL ENGLISH LITERATURE
MATH MATHEMATICS
POLY POLITICAL SCIENCE
An Unloader-style tool
Oracle does not ship a tool that unloads data to a flat file. However, SQL*Plus can SELECT and format the data and spool the output to a file:
set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
spool oradata.txt
select col1 || ',' || col2 || ',' || col3
from tab1
where col2 = 'XYZ';
spool off
Alternatively, the UTL_FILE PL/SQL package can be used:
rem Remember to set the utl_file_dir='c:\oradata' parameter in initSID.ora
declare
fp utl_file.file_type;
begin
fp := utl_file.fopen('c:\oradata','tab1.txt','w');
utl_file.putf(fp, '%s, %s\n', 'TextField', 55);
utl_file.fclose(fp);
end;
/
Of course, third-party tools such as SQLWays or TOAD from Quest can also be used.
Loading variable-length (delimited) or fixed-length records
For example:
LOAD DATA
INFILE *
INTO TABLE load_delimited_data
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( data1,
data2
)
BEGINDATA
11111,AAAAAAAAAA
22222,"A,B,C,D,"
Below is an example of loading fixed-position (fixed-length) data:
LOAD DATA
INFILE *
INTO TABLE load_positional_data
( data1 POSITION(1:5),
data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
Skipping data rows:
The "SKIP n" keyword specifies how many rows of data to skip during the load. For example:
LOAD DATA
INFILE *
INTO TABLE load_positional_data
SKIP 5
( data1 POSITION(1:5),
data2 POSITION(6:15)
)
BEGINDATA
11111AAAAAAAAAA
22222BBBBBBBBBB
Modifying data while loading:
Data can be transformed as it is loaded into the database. Note that this only applies to conventional path loads, not to direct path loads. For example:
LOAD DATA
INFILE *
INTO TABLE modified_data
( rec_no "my_db_sequence.nextval",
region CONSTANT '31',
time_loaded "to_char(SYSDATE, 'HH24:MI')",
data1 POSITION(1:5) ":data1/100",
data2 POSITION(6:15) "upper(:data2)",
data3 POSITION(16:22) "to_date(:data3, 'YYMMDD')"
)
BEGINDATA
11111AAAAAAAAAA991201
22222BBBBBBBBBB990112
Another example, which uses DECODE to default the mailing address columns from the street address columns when they are empty:
LOAD DATA
INFILE 'mail_orders.txt'
BADFILE 'bad_orders.txt'
APPEND
INTO TABLE mailing_list
FIELDS TERMINATED BY ","
( addr,
city,
state,
zipcode,
mailing_addr "decode(:mailing_addr, null, :addr, :mailing_addr)",
mailing_city "decode(:mailing_city, null, :city, :mailing_city)",
mailing_state
)
Loading data into multiple tables:
For example:
LOAD DATA
INFILE *
REPLACE
INTO TABLE emp
WHEN empno != ' '
( empno POSITION(1:4) INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
deptno POSITION(17:18) CHAR,
mgr POSITION(20:23) INTEGER EXTERNAL
)
INTO TABLE proj
WHEN projno != ' '
( projno POSITION(25:27) INTEGER EXTERNAL,
empno POSITION(1:4) INTEGER EXTERNAL
)
Loading only selected records:
In the following example, (01) refers to the first character and (30:37) to characters 30 through 37:
LOAD DATA
INFILE 'mydata.dat' BADFILE 'mydata.bad' DISCARDFILE 'mydata.dis'
APPEND
INTO TABLE my_selective_table
WHEN (01) <> 'H' and (01) <> 'T' and (30:37) = '19991217'
(
region CONSTANT '31',
service_key POSITION(01:11) INTEGER EXTERNAL,
call_b_no POSITION(12:29) CHAR
)
Skipping fields during a load:
POSITION(x:y) can be used to pick out parts of each record. From Oracle 8i on, a FILLER field can also be specified; FILLER fields are skipped and ignored when loading from the data file. For example:
LOAD DATA
TRUNCATE INTO TABLE T1
FIELDS TERMINATED BY ','
( field1,
field2 FILLER,
field3
)
Loading multi-line records:
One of the following two options can be used to combine several physical records into one logical record (a minimal sketch follows the list):
CONCATENATE: – use when SQL*Loader should combine the same number of physical records together to form one logical record.
CONTINUEIF – use if a condition indicates that multiple records should be treated as one. Eg. by having a ‘#’ character in column 1.
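A minimal CONCATENATE sketch (the table t_notes and its columns are placeholders); every two physical records are combined into one logical record:

LOAD DATA
INFILE *
CONCATENATE 2
INTO TABLE t_notes
FIELDS TERMINATED BY ","
( note_id,
note_text
)
BEGINDATA
1,first half of a note
 and its second half
2,another note
 continued on a second line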
When SQL*Loader commits:
By default the commit is issued after the data file has been loaded.
The ROWS= parameter can also be specified to commit after every given number of records.
Improving SQL*Loader performance (a sample command and control file follow this list):
1) A simple but easily overlooked point: do not keep any indexes and/or constraints (primary key) on the table while it is being loaded; leaving them in place noticeably slows the load down, even when the ROWS= parameter is used.
2) Adding DIRECT=TRUE speeds the load up considerably, although in many situations it cannot be used.
3) The UNRECOVERABLE option turns off redo logging for the load. It can only be used together with direct path loads.
4) Several load jobs can be run at the same time.
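For example, a direct path load that applies tips 2) and 3) might look like this (the control file, data file, and table names are placeholders):

sqlldr userid=scott/tiger control=big_load.ctl log=big_load.log direct=true rows=50000

-- big_load.ctl: UNRECOVERABLE is a control file option, valid only for direct path loads
UNRECOVERABLE
LOAD DATA
INFILE 'big_load.dat'
APPEND
INTO TABLE big_tab
FIELDS TERMINATED BY ','
(cust_id, cust_name, sal)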
The difference between conventional and direct path loads:
A conventional path load inserts the data with ordinary INSERT statements. A direct path load (DIRECT=TRUE) bypasses much of that processing and writes the data blocks directly into the data files.

Oracle 10g SQL*Loader Parameters in Detail

Typing sqlldr in cmd displays all of the parameters, as follows:
USERID={username[/password][@net_service_name]|/}
CONTROL=control_file_name

LOG=path_file_name
BAD=path_file_name
DATA=path_file_name
DISCARD=path_file_name
DISCARDMAX=logical_record_count
SKIP=logical_record_count
SKIP_INDEX_MAINTENANCE={TRUE | FALSE}
SKIP_UNUSABLE_INDEXES={TRUE | FALSE}
LOAD=logical_record_count
ERRORS=insert_error_count
ROWS=rows_in_bind_array
BINDSIZE=bytes_in_bind_array
SILENT=[(]keyword[,keyword…][)]
DIRECT={TRUE | FALSE}
PARFILE=path_file_name
PARALLEL={TRUE | FALSE}
READSIZE=bytes_in_read_buffer
FILE=database_datafile_name
The detailed SQL*Loader parameter descriptions are as follows:

USERID = {username[/password] [@net_service_name]|/}

Specifies the username and password to use when connecting to the database. The net_service_name parameter optionally allows you to connect to a remote database.

Use a forward-slash character ( / ) to connect to a local database using operating system authentication.

 

CONTROL = control_file_name
Specifies the name, which may include the path, of the control file. The default extension is .ctl.

LOG = path_file_name
Specifies the name of the log file to generate for a load session. You may include a path as well. By default, the log file takes on the name of the control file, but with a .log extension, and is written to the same directory as the control file. If you specify a different name, the default extension is still .log. However, if you use the LOG parameter to specify a name for the log file, it will no longer be written automatically to the directory that contains the control file.

BAD = path_file_name
Specifies the name of the bad file. You may include a path as part of the name. By default, the bad file takes the name of the control file, but with a .bad extension, and is written to the same directory as the control file. If you specify a different name, the default extension is still .bad. However, if you use the BAD parameter to specify a bad file name, the default directory becomes your current working directory. If you are loading data from multiple files, then this bad file name only gets associated with the first file being loaded.

DATA = path_file_name
Specifies the name of the file containing the data to load. You may include a path as part of the name. By default, the name of the control file is used, but with the .dat extension. If you specify a different name, the default extension is still .dat. If you are loading from multiple files, you can only specify the first file name using this parameter. Place the names of the other files in their respective INFILE clauses in the control file.

DISCARD = path_file_name
Specifies the name of the discard file. You may include a path as part of the name. By default, the discard file takes the name of the control file, but it has a .dis extension. If you specify a different name, the default extension is still .dis. If you are loading data from multiple files, then this discard file name only gets associated with the first file being loaded.

DISCARDMAX = logical_record_count
Sets an upper limit on the number of logical records that can be discarded before a load will terminate. The limit is actually one less than the value specified for DISCARDMAX. When the number of discarded records becomes equal to the value specified for DISCARDMAX, the load will terminate. The default is to allow an unlimited number of discards. However, since DISCARDMAX only accepts numeric values, it is not possible to explicitly specify the default behavior.

TIP: There is also an undocumented parameter named DISCARDS that functions the same as DISCARDMAX. The use of DISCARDMAX is preferred, but you may occasionally encounter references to DISCARDS

 

 

SKIP = logical_record_count
Allows you to continue an interrupted load by skipping the specified number of logical records. If you are continuing a multiple table direct path load, you may need to use the CONTINUE_LOAD clause in the control file rather than the SKIP parameter on the command line. CONTINUE_LOAD allows you to specify a different number of rows to skip for each table that you are loading.

SKIP_INDEX_MAINTENANCE = {TRUE | FALSE}
Controls whether or not index maintenance is done for a direct path load. This parameter does not apply to conventional path loads. A value of TRUE causes index maintenance to be skipped. Any index segments (partitions) that should have been updated will be marked as unusable. A value of FALSE causes indexes to be maintained as they normally would be. The default is FALSE.
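For instance, a hypothetical direct path load that defers index maintenance, followed by a rebuild of the index it leaves unusable (all names are placeholders):

sqlldr userid=scott/tiger control=big_load.ctl direct=true skip_index_maintenance=true

ALTER INDEX big_tab_idx REBUILD;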

 

SKIP_UNUSABLE_INDEXES = {TRUE | FALSE}
Controls the manner in which a load is done when a table being loaded has indexes in an unusable state. A value of TRUE causes SQL*Loader to load data into tables even when those tables have indexes marked as unusable. The indexes will remain unusable at the end of the load. One caveat is that if a UNIQUE index is marked as unusable, the load will not be allowed to proceed.

A value of FALSE causes SQL*Loader not to insert records when those records need to be recorded in an index marked as unusable. For a conventional path load, this means that any records that require an unusable index to be updated will be rejected as errors. For a direct path load, this means that the load will be aborted the first time such a record is encountered. The default is FALSE.

LOAD = logical_record_count
Specifies a limit on the number of logical records to load. The default is to load all records. Since LOAD only accepts numeric values, it is not possible to explicitly specify the default behavior

ERRORS = insert_error_count
Specifies a limit on the number of errors to tolerate before the load is aborted. The default is to abort a load when the error count exceeds 50. There is no way to allow an unlimited number of errors. The best you can do is to specify a very high number for this parameter

ROWS = rows_in_bind_array
The precise meaning of this parameter depends on whether you are doing a direct path load or a conventional load. If you are doing a conventional load, then you can use this parameter to control the number of rows in the bind array. This represents the number of rows that SQL*Loader loads with each INSERT statement, and also represents the commit frequency. The default is 64 rows.

If you are doing a direct path load, then ROWS specifies the number of rows to read from the input file before saving the data to the database. SQL*Loader will round up the ROWS value to coincide with an even number of database blocks. A data save in a direct path load is analogous to a commit in a conventional path load. The default, when a direct path load is done, is to do one save at the end of the load.
TIP: The BINDSIZE and ROWS parameters both affect the size of the bind array.

 

BINDSIZE = bytes_in_bind_array
Specifies the maximum size, in bytes, of the bind array. This parameter overrides any bind array size computed as a result of using the ROWS parameter. The default bind array size is 65,536 bytes, or 64K

 

SILENT = [(]keyword[,keyword...][)]
Allows you to suppress various header and feedback messages that SQL*Loader normally displays during a load session. The valid keywords are HEADER, FEEDBACK, ERRORS, DISCARDS, PARTITIONS, and ALL; each one suppresses the corresponding class of messages.

 

There are two ways you can specify values for the SILENT parameter. If you have only one keyword, you can supply it following the equals sign (=), as follows:

SILENT = ALL

If you have several keywords to use, you can place them in a comma-delimited list. You may optionally place that list inside parentheses. For example:

SILENT = (DISCARDS,ERRORS)
DIRECT = {TRUE | FALSE}

Determines the data path used for the load. A value of FALSE results in a conventional path load. A value of TRUE results in a direct path load. The default is FALSE.
PARFILE = path_file_name

Tells SQL*Loader to read command-line parameter values from a text file. This text file is referred to as a parameter file, and contains keyword/value pairs. Usually, the keyword/value pairs are separated by line breaks. Use of the PARFILE parameter can save a lot of typing if you need to perform the same load several times, because you won’t need to retype all the command-line parameters each time. There is no default extension for parameter files.
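For instance, a hypothetical parameter file load.par containing:

userid=scott/tiger
control=product.ctl
log=product.log
direct=true

would be used as: sqlldr parfile=load.par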
PARALLEL = {TRUE | FALSE}

Indicates whether or not you are doing a direct path parallel load. If you are loading the same object from multiple direct path load sessions, then set this to TRUE. Otherwise, set it to FALSE. The default is FALSE.
READSIZE = bytes_in_read_buffer

Specifies the size of the buffer used by SQL*Loader when reading data from the input file. The default value is 65,536 bytes, or 64K. The values of the READSIZE and BINDSIZE parameters should match. If you supply values for these two parameters that do not match, SQL*Loader will adjust them.
FILE = database_datafile_name

Specifies the database data file from which to allocate extents. Use this parameter when doing parallel loads, to ensure that each load session is using a different disk. If you are not doing a direct path load, this parameter will be ignored.
In addition to being passed by keyword, parameters may also be passed by position. To do this, you simply list the values after the sqlldr command in the correct order. For example, the following two SQL*Loader commands yield identical results:

sqlldr system/manager profile.ctl profile.log
sqlldr userid=system/manager control=profile.ctl log=profile.log

You can even mix the positional and keyword methods of passing command-line parameters. The one rule when doing this is that all positional parameters must come first. Once you start using keywords, you must continue to do so. For example:

sqlldr system/manager control=profile.ctl log=profile.log

When you pass parameters positionally, you must not skip any. Also, be sure to get the order right. You must supply parameter values in the order shown earlier in this section. Given the fact that you typically will use only a few parameters out of the many that are possible, it’s usually easier to pass those parameters as keyword/value pairs than it is to pass them positionally. Using keyword/value pairs also makes long SQL*Loader commands somewhat self-documenting. The one exception to this rule is that you might wish to pass the username and password positionally, since they come first, and then pass in the rest of the parameters by name.

 

 

Parameters on the command line may be separated by spaces, by commas, or by both spaces and commas. All three of the following commands for example, are legitimate:

sqlldr system/manager,control=product.ctl,log=product.log
sqlldr system/manager, control=product.ctl, log=product.log
sqlldr system/manager control=product.ctl log=product.log

Spaces are acceptable as well, on either side of the equals sign (=), in keyword/value pairs.

Special characters are rarely needed on the command line, but when you do use them in an option value, you must enclose that value within quotes. For example, beginning with release 8.1.7, if you connect as the user SYS, you also need to specify “AS SYSDBA” as part of your connect string. Because of the spaces, you’ll need to enclose your entire connect string within quotes. For example, for Windows:

sqlldr "sys/password AS SYSDBA" control=product.ctl (Windows)
