

mysql: Create external table using CSV engine and file

One thing the CSV engine allows for is instantly turning a flat file into data accessible via SQL in MySQL. For example, imagine needing to load a file containing 5 million records into a MySQL table that uses the MyISAM storage engine:

mysql> desc client_detail;
+-----------------------+---------------+------+-----+---------+-------+
| Field                 | Type          | Null | Key | Default | Extra |
+-----------------------+---------------+------+-----+---------+-------+
| client_transaction_id | decimal(22,0) | YES  |     | NULL    |       |
| transaction_timestamp | datetime      | YES  |     | NULL    |       |
| transaction_comment   | varchar(30)   | YES  |     | NULL    |       |
+-----------------------+---------------+------+-----+---------+-------+
3 rows in set (0.02 sec)

mysql> load data infile '/usr/local/mysql519/data/gim/flatdata.dat' 
    ->  into table client_detail
    ->  fields terminated by ',';
Query OK, 5023575 rows affected, 0 warnings (27.38 sec)
Records: 5023575  Deleted: 0  Skipped: 0  Warnings: 0

While MyISAM accepts the data pretty quickly, you can make all the data in the flat file instantly available to MySQL in the following manner. First, create a CSV table that mirrors the format of the incoming flat file:

mysql> create table client_detail_csv (client_transaction_id decimal(22,0),
    -> transaction_timestamp datetime, transaction_comment varchar(30))
    -> engine=csv;
Query OK, 0 rows affected (0.01 sec)

Then, at the operating system level, simply rename the flat file to the .CSV file MySQL created for the new CSV table:

[mysql@linux1 ~]$ cd /usr/local/mysql519/data/gim

[mysql@linux1 gim]$ ls -l
total 785848
-rw-rw----  1 mysql mysql        35 May  1 06:54 client_detail_csv.CSM
-rw-rw----  1 mysql mysql         0 May  1 06:54 client_detail_csv.CSV
-rw-rw----  1 mysql mysql      8718 May  1 06:54 client_detail_csv.frm
-rw-rw----  1 mysql mysql      8718 May  1 05:56 client_detail.frm
-rw-rw----  1 mysql mysql 221037300 May  1 06:00 client_detail.MYD
-rw-rw----  1 mysql mysql      1024 May  1 06:00 client_detail.MYI
-rw-rw----  1 mysql mysql 291367350 May  1 05:55 flatdata.dat

[mysql@linux1 gim]$ mv flatdata.dat client_detail_csv.CSV

The rename operation completes instantly because no data actually moves at the operating system level. At that point, after flushing the tables so MySQL reopens the underlying file, all the data is available to it:

mysql> flush tables;
Query OK, 0 rows affected (0.00 sec)
mysql> select count(*) from client_detail_csv;
+----------+
| count(*) |
+----------+
|  5023575 |
+----------+
1 row in set (16.17 sec)

Thus, you’ve effectively loaded 5 million records into MySQL instantaneously. Better still, the effect would be the same if the file contained 10 million, 20 million, or 100 million records.
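
One caveat: the CSV engine does not support indexes, so if you later need indexed access you eventually have to copy the rows into (or convert the table to) a regular storage engine. A minimal sketch using the table names from above (either statement reads the full 5-million-row file, so this step is no longer instantaneous):

mysql> insert into client_detail
    ->  select * from client_detail_csv;

mysql> alter table client_detail_csv engine=myisam;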

oracle: external table

  1. In the browser, go to the URL ip-Of-Server:1158/em/console/
  2. Log in there.
  3. Go to Schema -> Directory Objects.
  4. Click the “Create” button.
  5. Fill in the form:
    * Name: DIR_WLIST
    * Path: /tmp/test_v_dir

    where Name = the name of the directory object and Path = the path to the directory on the server. (A SQL-only way to create the same directory object is sketched after this list.)

  6. Now, through an FTP client (or PuTTY), create a file test.txt in /tmp/test_v_dir/.
  7. Then go to SQL*Plus.
  8. Type in the following query:

    create table XTERN_test( student_name varchar2(100) )
    organization external (
      type oracle_loader
      default directory DIR_WLIST
      access parameters (
        records delimited by newline
      )
      location ('test.txt')
    )
    reject limit unlimited;
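
As an aside, the directory object from steps 3-5 can also be created without the Enterprise Manager console. A minimal sketch in SQL*Plus, assuming a user with the CREATE ANY DIRECTORY privilege and an example grantee SCOTT (both assumptions, not part of the original steps); the final query simply reads test.txt through the external table, returning each record as a row:

SQL> create or replace directory DIR_WLIST as '/tmp/test_v_dir';

SQL> grant read on directory DIR_WLIST to scott;

SQL> select student_name from XTERN_test;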