
Hello, I'm performing an ora2pg migration. There are BLOB data types in Oracle, and when I import the INSERT script generated by ora2pg into Postgres I get the error "invalid memory alloc request size 1073741824"... I know the maximum size of a field in Postgres is 1 GB... is there any way to avoid this issue?

6 replies

30 views

Reduce the size in Oracle. If you really need that much data, split it into several records and put it together again in your application.

N@R£N (question author)
Stefanie Janine Stölting
Reduce the size in Oracle. If you really need that...

I did that in Oracle. We have a problem with one row, so I created a new table with just that one row. Even that single row is more than 1 GB.

N@R£N
I did that in Oracle. We have a problem with one r...

Checking whether that data is really needed would be my first advice. And if yes, I would split it into chunks smaller than the maximum column size.

N@R£N (question author)
Stefanie Janine Stölting
Checking whether that data is really needed would ...

I have one doubt: how can we split that particular row? As it is BLOB data, I can see only binary values.

N@R£N
I have one doubt: how can we split that particular...

Split the data stream in the application. And for binary data the default advice is to store it outside of the database and only store paths within the db.
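A minimal sketch of that file-outside-the-database pattern, assuming Python with psycopg2 and a hypothetical documents(id, file_path) table; the storage directory and all names are illustrative, not anything from this thread:

```python
import hashlib
import shutil
from pathlib import Path

import psycopg2  # assumed driver; any DB-API driver works the same way

BLOB_DIR = Path("/data/blobs")  # hypothetical external storage location


def store_blob_as_file(conn, doc_id: int, src: Path) -> None:
    """Copy the binary file to external storage and keep only its path in Postgres."""
    BLOB_DIR.mkdir(parents=True, exist_ok=True)

    # Hash the file in 1 MB pieces so a multi-GB blob never sits in memory at once,
    # and use the digest as the file name so re-runs are idempotent.
    digest = hashlib.sha256()
    with src.open("rb") as f:
        for piece in iter(lambda: f.read(1 << 20), b""):
            digest.update(piece)
    dest = BLOB_DIR / f"{digest.hexdigest()}{src.suffix}"
    shutil.copy2(src, dest)

    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO documents (id, file_path) VALUES (%s, %s) "
            "ON CONFLICT (id) DO UPDATE SET file_path = EXCLUDED.file_path",
            (doc_id, str(dest)),
        )
    conn.commit()
```

The database then only carries a short text path, so the 1 GB field limit never comes into play; backups and cleanup of the external directory have to be handled separately.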

N@R£N
I have one doubt: how can we split that particular...

So, in PG we store that as bytea, right? In PG, I split it into multiple rows of binary data on insert. Then I read all of those rows back, and in my application I assemble them into one field again. Yes, this requires a change to your application. The other alternative is to store it as a file somewhere.

We had to implement this very thing, for the same reason, and we had 2 rows that exceeded the limit. It was a bit of a pain. But since we knew the conversion was a 2-year process for us, we decided early on that things like this we would REDESIGN in Oracle to make the conversion to PG more seamless (like rewriting (+) queries to proper JOIN queries, fixing "Quoted" DB field names, etc.). This way the conversion got easier and cleaner over time, because we could stabilize the fixes in PG, manually adjust the data for now, and still have an automatable process for future conversions of the data.
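A minimal sketch of that chunk-and-reassemble approach, assuming Python with psycopg2 and a hypothetical blob_chunks(doc_id, seq, data bytea) table; the 64 MB chunk size and all names are illustrative:

```python
from typing import Iterator

import psycopg2

# Assumed table, e.g.:
# CREATE TABLE blob_chunks (doc_id bigint, seq int, data bytea, PRIMARY KEY (doc_id, seq));

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per row, comfortably below the 1 GB field limit


def iter_chunks(payload: bytes, size: int = CHUNK_SIZE) -> Iterator[bytes]:
    """Yield successive slices of the payload."""
    for offset in range(0, len(payload), size):
        yield payload[offset:offset + size]


def store_chunked(conn, doc_id: int, payload: bytes) -> None:
    """Split one oversized binary value across several bytea rows."""
    with conn.cursor() as cur:
        for seq, chunk in enumerate(iter_chunks(payload)):
            cur.execute(
                "INSERT INTO blob_chunks (doc_id, seq, data) VALUES (%s, %s, %s)",
                (doc_id, seq, psycopg2.Binary(chunk)),
            )
    conn.commit()


def load_chunked(conn, doc_id: int) -> bytes:
    """Read the chunks back in order and reassemble the original value in the application."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT data FROM blob_chunks WHERE doc_id = %s ORDER BY seq",
            (doc_id,),
        )
        return b"".join(bytes(row[0]) for row in cur)
```

Note that load_chunked still builds the whole value in application memory; if the consumer can work with a stream, yielding the chunks one by one avoids that while still staying under the per-field limit in the database.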
