MB data in one cell, using ora2pg?
The source is an Oracle DB and the target is a Postgres DB.
Amit, so I assume you have BLOB data in Oracle. Beware the 1 GB limit for a single field; I don't know whether it applies to the ENCODED value (the hex byte representation as ASCII is bigger than the required storage). So, my answer: find your largest record in Oracle, create the table in PG, and load that row of data into PG (maybe via an FDW, maybe by extracting it from Oracle and saving it in \COPY format). Push that data into PG, pull it back out with a query, and then run pg_dump and dump that table. If you don't get an OOM (out of memory) error, you should be good. If you do, and that's why you are asking... well, then you will have to CHUNK your data into 2 columns and re-assemble them in the client.
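Something like this, as a rough sketch (every name below is made up):

```sql
-- Hypothetical staging table for the single biggest Oracle row.
CREATE TABLE blob_probe (id int PRIMARY KEY, payload bytea);

-- Load that one row here by whatever route is easiest for you:
-- oracle_fdw, a file extracted from Oracle in \COPY format, or a
-- server-side file read.

-- Pull it back out with a query; the size check is enough to see
-- whether the value survives the round trip.
SELECT id, octet_length(payload) AS bytes FROM blob_probe;

-- Then, from the shell, dump just that table and watch for an
-- out-of-memory / "invalid memory alloc request" error:
--   pg_dump --table=blob_probe --file=blob_probe.sql yourdb
```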
Hi Krik, thanks for your response. The BLOB size is 510 MB, and it's in a readable format, not hex. I have loaded that single cell into Postgres as bytea. To do it manually I used pg_read_file('/abc/xyz/bla.sql')::bytea, and it loaded successfully. My guess is that reading a file this way requires less memory because the data gets broken into chunks and written to the hard drive. When I tried to load the same data using psql -f filename.sql I got this error: ERROR: invalid memory alloc request size 1073741824. Note: 1073741824 bytes is 1 GiB, i.e. about 1.07 GB. My questions: 1. We got a memory error. Does it mean shared_memory was unable to allocate that 1 GB? 2. The data is only 504 MB, yet it asks for roughly double that. Does that mean we need twice the memory allocation in RAM when we write to the hard disk? And I will try your FDW suggestion.
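For reference, the server-side load was along these lines (the table name is a placeholder, and the sketch uses pg_read_binary_file(), the variant that returns bytea directly; reading a file by absolute path normally needs superuser rights or the pg_read_server_files role):

```sql
-- Placeholder table for the single large value.
CREATE TABLE blob_target (id int PRIMARY KEY, payload bytea);

-- The file is read by the server process itself, so its contents
-- never have to travel through the client or the SQL parser as one
-- huge text literal.
INSERT INTO blob_target (id, payload)
VALUES (1, pg_read_binary_file('/abc/xyz/bla.sql'));
```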
Okay, this means you need to split it into chunks. The max a single field can be, from the query side, is 1 GB. Can you try taking only half of the bytes? Although it will still break pg_dump... Can you load one half into one column and the rest into another? An FDW won't help; I am guessing it has the same limit.
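If the file is still available on the database server, the split can be done entirely server-side with pg_read_binary_file(), which accepts an offset and a length. A rough sketch with made-up names and a made-up split point (keep each chunk well below ~500 MB so its hex text form stays under the 1 GB ceiling):

```sql
-- Hypothetical table: two bytea columns hold the two halves.
CREATE TABLE blob_chunked (
    id    int PRIMARY KEY,
    part1 bytea,
    part2 bytea
);

-- pg_read_binary_file(filename, offset, length) reads a slice of the
-- file on the server side. Here a ~510 MB file is split at the 255 MB
-- mark (255 * 1024 * 1024 = 267386880 bytes); adjust to the real size.
INSERT INTO blob_chunked (id, part1, part2)
VALUES (
    1,
    pg_read_binary_file('/abc/xyz/bla.sql', 0,         267386880),
    pg_read_binary_file('/abc/xyz/bla.sql', 267386880, 267386880)
);

-- Fetch the halves separately and glue them back together in the
-- client, so no single value crosses the text-protocol limit.
SELECT part1 FROM blob_chunked WHERE id = 1;
SELECT part2 FROM blob_chunked WHERE id = 1;
```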
> because the data gets broken into chunks

No, it does not. It works just because it's binary at every stage of the process.

> Does it mean shared_memory was unable to allocate that 1 GB?

No, it does not: a) shared memory is irrelevant here (it is not used for these allocations at all), and b) it is never re-allocated after server start anyway.

> Does that mean we need twice the memory allocation in RAM when we write to the hard disk?

It means nothing like that. The problem here is that the text representation of the binary value (a hex string) is 2x the size of the value, while the maximum allocation PostgreSQL is capable of for any scalar value is 1 GB. Therefore, you won't be able to insert or retrieve such values using the text [client-server] protocol, period. The binary format in the client-server protocol works just fine, but the usual PostgreSQL tools (psql, pg_dump and pg_restore) simply don't support it (for obvious reasons: they would need to convert those values to text anyway).
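To see the 2x factor concretely, here is a trivial illustration (it does not depend on your data at all):

```sql
-- Every byte of a bytea becomes two hex characters in its text form,
-- so a ~510 MB value needs a text buffer of roughly 1 GB, which runs
-- into PostgreSQL's 1 GB per-allocation ceiling.
SELECT octet_length('\x00ff10'::bytea)          AS binary_bytes,  -- 3
       length(encode('\x00ff10'::bytea, 'hex')) AS hex_chars;     -- 6
```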
The data is all in one field. I am not sure how to divide it in half; could you send a link so I can read about it and do as you suggest?
What's the actual problem, though? I mean, most "naïve" tools won't be able to read/write that no matter what you do, but any application written in a language whose PostgreSQL library uses the binary format in the protocol (most modern libraries are capable, they say) will have no problem whatsoever.
Hi Yaroslav, thanks for your reply.

> the text representation of the binary value ... is 2x the size of the value

Thanks, I got this. Is there a book where I can read about the concepts you described, and also about how memory is allocated while reading a file and how the whole process works?
The problem was during migration. I am migrating structure and data from Oracle to Postgres, but ora2pg was unable to migrate the Oracle BLOB to a Postgres bytea because the size of one field is around 504 MB. So I went ahead and loaded the data manually, and the problem is resolved. I asked the two questions here to understand why my other approaches didn't work, and you answered them.
Well, it's mentioned (look for "1 GB") here, here and here, but it's not an explicit statement, indeed. Perhaps you could write to -docs (or -general) and ask for it to be documented better?
Hmm... why don't you file a bug report in the ora2pg tracker, too? (Not sure how it works, though; if it uses, say, text-format COPY, they won't be able to fix that.)
Good idea. I will do that.
Thanks, I am checking it.