5 posts in the 'DB' category

  1. 2010.11.01 What is ACID?
  2. 2010.08.02 mysql timestamp to datetime
  3. 2010.07.14 mysql date time to unix time stamp
  4. 2009.11.23 Mysql password, old_password
  5. 2009.08.26 oracle lob data insert lock reason
DB2010. 11. 1. 16:06

ACID is an acronym for Atomicity, Consistency, Isolation, and Durability.
1. Atomicity
When a transaction completes, the data must be in the state that results from applying all of the transaction's operations, or in the state it was in before the transaction was accepted. In other words, data updates must be all-or-nothing.

2. Consistency
Whether or not a transaction completes, the data must remain consistent. In a bank, for example, the sum of the transactions recorded per teller must match the total for the whole branch. The result of processing a transaction must be the same regardless of the order of the intermediate steps.

3. Isolation
Transactions running at the same time must not interfere with one another. The intermediate state of a transaction in progress must not be visible to other transactions, so concurrent transactions produce the same result as if they had been executed one after another.

4. Durability
Once a transaction has completed, its results must not be lost or altered by a subsequent failure.
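
The all-or-nothing behaviour of atomicity is what you rely on when grouping several statements into one transaction. Below is a minimal JDBC sketch; the ACCOUNTS table, its columns, and the connection parameters are made-up names for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    // Either both UPDATEs take effect or neither does.
    public static void transfer(String url, String user, String pass,
                                int from, int to, int amount) throws SQLException {
        try (Connection cxn = DriverManager.getConnection(url, user, pass)) {
            cxn.setAutoCommit(false); // group both updates into a single transaction
            try (PreparedStatement debit = cxn.prepareStatement(
                     "UPDATE ACCOUNTS SET BALANCE = BALANCE - ? WHERE ID = ?");
                 PreparedStatement credit = cxn.prepareStatement(
                     "UPDATE ACCOUNTS SET BALANCE = BALANCE + ? WHERE ID = ?")) {
                debit.setInt(1, amount);
                debit.setInt(2, from);
                debit.executeUpdate();
                credit.setInt(1, amount);
                credit.setInt(2, to);
                credit.executeUpdate();
                cxn.commit();   // both updates become visible together
            } catch (SQLException e) {
                cxn.rollback(); // neither update survives a failure
                throw e;
            }
        }
    }
}

If the second UPDATE fails, the rollback also preserves consistency in this sketch: the total balance across the two accounts is unchanged.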
Posted by 크롭바디
DB2010. 8. 2. 15:41

select FROM_UNIXTIME(UNIX_TIMESTAMP(now()));
Posted by 크롭바디
DB2010. 7. 14. 17:17

SELECT UNIX_TIMESTAMP('2005-03-27 03:00:00');

Apply this when comparing dates stored in DATETIME format.
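
For instance, a minimal JDBC sketch of such a comparison (the LOG_TABLE table, its CREATED_AT DATETIME column, and the connection details are made up for illustration): convert the column with UNIX_TIMESTAMP() and compare it against plain integer timestamps.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DatetimeRangeQuery {
    public static void main(String[] args) throws Exception {
        Connection cxn = DriverManager.getConnection(
            "jdbc:mysql://localhost/test", "user", "pass"); // placeholder connection details
        // CREATED_AT is a DATETIME column; UNIX_TIMESTAMP() turns it into seconds since the epoch
        PreparedStatement ps = cxn.prepareStatement(
            "SELECT ID FROM LOG_TABLE " +
            "WHERE UNIX_TIMESTAMP(CREATED_AT) >= ? AND UNIX_TIMESTAMP(CREATED_AT) < ?");
        long now = System.currentTimeMillis() / 1000L; // current time as a Unix timestamp
        ps.setLong(1, now - 24 * 60 * 60);             // from 24 hours ago
        ps.setLong(2, now);                            // up to now
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getInt(1));
        }
        cxn.close();
    }
}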
Posted by 크롭바디
DB2009. 11. 23. 16:26
MySQL has a PASSWORD() function, so a password can easily be stored in hashed form.
The PASSWORD() function provided before MySQL 4.1 was later renamed OLD_PASSWORD(),
so it can still be used after upgrading from a pre-4.1 server to version 5 or later.

Also, if the two functions above return the same result,
set old_passwords = 0 in the MySQL configuration file /etc/my.cnf so that PASSWORD() produces the new-format hash.
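
A quick way to check which format the server is producing is to run both functions side by side. A minimal JDBC sketch (the connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PasswordHashCheck {
    public static void main(String[] args) throws Exception {
        Connection cxn = DriverManager.getConnection(
            "jdbc:mysql://localhost/test", "user", "pass"); // placeholder connection details
        Statement stmt = cxn.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT PASSWORD('secret'), OLD_PASSWORD('secret')");
        rs.next();
        // With old_passwords = 0 the first value is the 41-character hash starting with '*',
        // while the second is the 16-character pre-4.1 hash. If both values look the same,
        // the server is still generating old-format hashes.
        System.out.println(rs.getString(1));
        System.out.println(rs.getString(2));
        cxn.close();
    }
}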

Bonus: password hash C code
Posted by 크롭바디
DB/ORACLE2009. 8. 26. 20:12

Inserting LOB (Large Object) data into an Oracle table requires a somewhat unusual procedure.

Why "lock new row" ?
LOB data is not stored in the table proper, and is not even necessarily stored in the database per se. LOB data can be stored anywhere, even in a file accessed via a network file system. It is therefore outside the scope of the conventional database locking mechanism. Under such conditions it might be quite a lengthy process to obtain a lock, so for performance reasons LOB locks are not obtained unless you explicitly request them.


String
    sqlNewRow =
        "INSERT INTO BLOB_TABLE (BLOB_ID,BLOB_DATA) " +
        "VALUES (?,EMPTY_BLOB())",
    sqlLockRow =
        "SELECT BLOB_DATA FROM BLOB_TABLE " +
        "WHERE BLOB_ID = ? FOR UPDATE",
    sqlSetBlob =
        "UPDATE BLOB_TABLE " +
        "SET BLOB_DATA = ? " +
        "WHERE BLOB_ID = ?";
Connection cxn = DriverManager.getConnection(
    connect_string,username_string,password_string);
cxn.setAutoCommit(false);
//make new row
PreparedStatement ps = cxn.prepareStatement(sqlNewRow);
ps.setString(1,SomeValue);
ps.executeUpdate();
//lock new row
ps = cxn.prepareStatement(sqlLockRow);
ps.setString(1,SomeValue);
ResultSet rs = ps.executeQuery();
rs.next();



Quoted from http://forums.sun.com/thread.jspa?threadID=349880

Abstract
Discusses JDBC handling of LOB columns (CLOB, BLOB) in the context of Oracle 8i, but is worth reading irrespective of DB server platform.

Introduction
If you are reading this page you have undoubtedly recently encountered this error while trying to write data into a LOB column.

ORA-01461: can bind a LONG value only for insert into a LONG column
This document explains what the error means and what to do about it.

LOB - an extra level of indirection
It is very important to understand the difference between LONG data types and LOB data types.

LONG data types (LONGVARBINARY, LONGVARCHAR etc) are the original kind of blob. Because the LONG data is fetched with each row, this makes each fetched row potentially very large. It uses up the fetch buffer very quickly and makes the database server much less efficient because far fewer rows will fit in memory.

Columns of LOB data types (BLOB and CLOB etc) are a newer kind of blob and they work differently. LOB stands for Large OBject, but what a row fetch returns is not the data from a LOB column - it is a locator, an object that knows how to get the data. This vastly reduces the amount of fetch buffer used by each row and greatly improves database server efficiency.

LOB columns are not LONG columns. The error ORA-01461 is simply a type mismatch error because what you have actually tried to do is write a LOB into a LONG column.

This means that you need two steps to get and three steps to set the data. You fetch the LOB and then you ask it for the data. This drill is always the same and could be handled automatically by the database drivers. For fetching it is handled automatically - this is why you can use getAsciiStream() to read a CLOB.
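
As a rough illustration of the two-step read (the DOC_TEXT column name is made up, and the ResultSet is assumed to be positioned on a row already):

import java.io.Reader;
import java.sql.Clob;
import java.sql.ResultSet;
import java.sql.SQLException;

class ClobReadSketch {
    // DOC_TEXT is a hypothetical CLOB column; rs must already be positioned on a row.
    static Reader openClob(ResultSet rs) throws SQLException {
        Clob clob = rs.getClob("DOC_TEXT");   // step 1: fetch the locator, not the data
        return clob.getCharacterStream();     // step 2: ask the locator for the data
        // rs.getAsciiStream("DOC_TEXT") performs the same two steps behind the scenes.
    }
}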

However, use of the LOB is not automatic when writing. Why not? Good question. I don't know. The soon-to-be-released JDBC 3 standard suggests that in future this will be automated.

But it works for small values...
For values under a certain size, a different mechanism appears to be used by the driver. What I think happens is that for small values it simply constructs a SQL statement with embedded literals and passes that, whereas for large values I suspect a temporary stored procedure is used, and this causes stronger type checking.

Writing data using a LOB
First you need a reference to the LOB. Then you write the data to the LOB, which buffers it. Finally, you update the LOB column with the modified LOB, like this:

//Use oracle.sql.BLOB because java.sql.Blob lacks setBytes()
//JDBC3 java.sql.Blob adds the method setBytes(int,byte[])
//Oracle JDBC uses the method putBytes(int,byte[])
oracle.sql.BLOB dbBlob = (oracle.sql.BLOB)rs.getBlob(1);
//update blob
ps = cxn.prepareStatement(sqlSetBlob);
ps.setString(2,SomeValue);
dbBlob.putBytes(1,blob.getRaw());
/*
You can't do this:
    ps.setBinaryStream(1,blob.getInputStream(),blob.getRaw().length);
You must do it like this:
    ps.setBlob(1,dbBlob);
Which is weird because you CAN do this:
    InputStream is = rs.getBinaryStream(1);
Note that if the column were declared LONGVARBINARY then
setBinaryStream() would work.
*/
ps.setBlob(1,dbBlob);
ps.executeUpdate();   //run the UPDATE, then commit
cxn.commit();

In the above example, you may be wondering what blob.getRaw() is. This blob object is a helper object that I wrote to contain blob data in memory. The getRaw() method returns a byte[] containing the blob data, which is what you need for the putBytes() method of oracle.sql.BLOB.
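
The author's class is not shown in the thread; as a guess at the minimal shape such a helper would need for the snippets above, it could be as simple as:

import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Hypothetical stand-in for the author's in-memory blob helper.
class MemoryBlob {
    private final byte[] data;

    MemoryBlob(byte[] data) {
        this.data = data;
    }

    // Raw bytes, which is what oracle.sql.BLOB.putBytes() expects.
    byte[] getRaw() {
        return data;
    }

    // Stream view, e.g. for setBinaryStream() with LONGVARBINARY columns.
    InputStream getInputStream() {
        return new ByteArrayInputStream(data);
    }
}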

Inserting rows that contain LOB columns
This is trickier than doing updates because you can't write LOB data until you have a reference to the LOB, and you can't get a reference to the LOB until after you insert the row. So you start like this.

String
    sqlNewRow =
        "INSERT INTO BLOB_TABLE (BLOB_ID,BLOB_DATA) " +
        "VALUES (?,EMPTY_BLOB())",
    sqlLockRow =
        "SELECT BLOB_DATA FROM BLOB_TABLE " +
        "WHERE BLOB_ID = ? FOR UPDATE",
    sqlSetBlob =
        "UPDATE BLOB_TABLE " +
        "SET BLOB_DATA = ? " +
        "WHERE BLOB_ID = ?";
Connection cxn = DriverManager.getConnection(
    connect_string,username_string,password_string);
cxn.setAutoCommit(false);
//make new row
PreparedStatement ps = cxn.prepareStatement(sqlNewRow);
ps.setString(1,SomeValue);
ps.executeUpdate();
//lock new row
ps = cxn.prepareStatement(sqlLockRow);
ps.setString(1,SomeValue);
ResultSet rs = ps.executeQuery();
rs.next();
The expression EMPTY_BLOB() is Oracle specific. This function manufactures a LOB for insertion into the column. You must do this, because if you don't, when you try to update the row you won't get a LOB, you'll get NULL.

At this point you update the row as described in the preceding section.

Why "lock new row" ?
LOB data is not stored in the table proper, and is not even necessarily stored in the database per se. LOB data can be stored anywhere, even in a file accessed via a network file system. It is therefore outside the scope of the conventional database locking mechanism.

Under such conditions it might be quite a lengthy process to obtain a lock, so for performance reasons LOB locks are not obtained unless you explicitly request them.

Reading LOB data
Although you can use getAsciiStream() or getBinaryStream(), reading LOB data is not entirely straightforward. Before you can retrieve the data, you need to allocate a byte[] buffer sized to accommodate the data. Here's the catch: the stream reports its size according to the amount of data in the JDBC buffer. Before you retrieve the data, this is zero.

It gets worse. You cannot read the data twice, once to get the size and once to actually acquire the data, because the stream is forward-only.

The trick is to allocate 2K buffers and fetch, in a loop, until there is no more data. Then create a single buffer of exact size, and combine the data into it. This is messy, and depends on array manipulation techniques that are not well known, so I wrote a class to retrieve and represent such data, which implements all the required behaviour.

This class also has a getter method that returns a BINHEX string. This is extremely handy if you want to put the data into an XML stream, especially as the class also has a setter method that takes a BINHEX string, for going the other way.
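
A rough sketch of the chunked read loop described above (2K buffers, then one exactly-sized buffer); the author's class with the BINHEX helpers is not reproduced here:

import java.io.IOException;
import java.io.InputStream;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class LobBuffer {
    // Reads a LOB column in 2K chunks, then copies everything into one exactly-sized array.
    static byte[] read(ResultSet rs, int column) throws SQLException, IOException {
        InputStream in = rs.getBinaryStream(column);
        List<byte[]> chunks = new ArrayList<byte[]>();
        int total = 0;
        while (true) {
            byte[] buf = new byte[2048];
            int n = in.read(buf);
            if (n < 0) break;                       // end of stream
            byte[] chunk = new byte[n];
            System.arraycopy(buf, 0, chunk, 0, n);  // keep only the bytes actually read
            chunks.add(chunk);
            total += n;
        }
        byte[] result = new byte[total];            // single buffer of exact size
        int pos = 0;
        for (byte[] chunk : chunks) {
            System.arraycopy(chunk, 0, result, pos, chunk.length);
            pos += chunk.length;
        }
        return result;
    }
}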


Thanks
Al


Posted by 크롭바디