
Large backups fail at around 900GB

Thread solved
Beginner
Posts: 3
Comments: 21

I upgraded my Acronis 2019 to 2020. For the past couple of days I have been trying to back up.

I deleted all of my backups and created new ones so I could use the new .tibx format.

I have 3 drives I am backing up: 1x 512GB SSD, which backs up with no problem, plus 1x 3TB HDD and 1x 6TB HDD. The latter two are giving me issues. I have tried 3 different external drives formatted in NTFS, but the backups always fail once they reach around 900GB in size. I never had this issue with 2019.

Now I do not have any valid backups for those two HDDs and I am worried about data loss in case of drive failure.

Can someone please suggest what I should do other than going back to my 2019 license?

 

Thank you,

Patrick.

1 user found this helpful
Legend
Posts: 81
Comments: 17519

#1

Patrick, your document shows the backup is failing with "Invalid access to memory location", which is an error that I have not seen reported at all during the ATI 2020 Beta testing program or since.

Please download the MVP Log Viewer tool (link in my signature below) and use this to review the log file for your backup operation, to see if there are any further details of this error.

It is also possible that you have an actual memory issue here that is being exposed by the way ATI 2020 processes backups for the new .tibx format. I would recommend running a full memory test, or, if you have multiple memory modules installed, removing one module at a time and repeating the backup to see whether the problem continues or is resolved.

Beginner
Posts: 3
Comments: 21

#2

Thanks for the reply, Steve!

Here is what I see in my log for the latest failed backup using the log viewer:

8/24/2019 1:14:40 AM: -----
8/24/2019 1:14:40 AM: ATI Demon started. Version: 24.3.1.20600.
8/24/2019 1:14:40 AM: Backup reserve copy attributes: format tib; need_reserve_backup_copy true;
8/24/2019 1:14:40 AM: Operation ST6000VX0023-2EF110 SC60 started manually.
8/24/2019 1:14:41 AM: Backup reserve copy attributes: format tib; need_reserve_backup_copy true;
8/24/2019 1:14:41 AM: Operation: Backup
8/24/2019 3:19:39 AM: Error 0xb042f: Destination is unavailable.
8/24/2019 3:19:39 AM: Error 0x13c0005: Operation has completed with errors.

Start: 8/24/2019 1:14:40 AM
Stop: 8/24/2019 3:19:39 AM
Total Time: 02:04:59

I'm going to run a memtest on the machine today and see if it turns up any errors.

 

Best,

Patrick

Beginner
Posts: 3
Comments: 21

#3

I ran the standard Windows Memory Diagnostics which ran for about an hour.

Here's the result:

The Windows Memory Diagnostic tested the computer's memory and detected no errors.

I'd rather not start pulling my memory modules out as they are difficult to get to on my system.

Thank you,

Patrick

Legend
Posts: 81
Comments: 17519

#4

Patrick, what extra information does the MVP Log Viewer show if you switch from the Short to the Regular Log View?

Forum Hero
Posts: 68
Comments: 8096

#5

I have been wondering how ATI is handling backups. In 2019 and earlier, if you had Windows File Explorer open while the backup was running, you could refresh the view and watch the .tib file grow in size. I've monitored this in 2020 and do not see the file growing the same way. If the archive is being built in memory before being committed to disk, I can see this being a huge issue. I've got 32GB of memory and have only had to back up a maximum of about 100GB in all my tests, so I haven't hit this issue, but I could definitely see a problem for larger backups if this is true. Please open a support case for now and keep us posted!

Beginner
Posts: 3
Comments: 21

#6
Steve Smith wrote:

Patrick, what extra information does the MVP Log Viewer show if you switch from the Short to the Regular Log View?

 8/24/2019 11:29:29 AM: -04:00 31656 I00000000: -----
8/24/2019 11:29:29 AM: -04:00 31656 I00000000: ATI Demon started. Version: 24.3.1.20600.
8/24/2019 11:29:29 AM: -04:00 31656 I00640000: Backup reserve copy attributes: format tib; need_reserve_backup_copy true;
8/24/2019 11:29:29 AM: -04:00 31656 I00640002: Operation WDC WD3003FZEX-00Z4SA0 01.01A01 started manually.
8/24/2019 11:29:30 AM: -04:00 31656 I00640000: Backup reserve copy attributes: format tib; need_reserve_backup_copy true;
8/24/2019 11:29:30 AM: -04:00 31656 I013C0000: Operation: Backup
8/24/2019 1:16:22 PM: -04:00 31656 E000B042F: Error 0xb042f: Destination is unavailable.
| trace level: error
| line: 0x5d5406763c32a94
| file: c:\bs_hudson\workspace\1088\products\imager\archive\impl\operations\utils.cpp:582
| function: TrueImage::Archive::MakeDestinationUnavailableError
| line: 0x5d5406763c32a94, c:\bs_hudson\workspace\1088\products\imager\archive\impl\operations\utils.cpp:582, TrueImage::Archive::MakeDestinationUnavailableError
| Path: G:\
| StrId: \local\hd_ev\vol_guid(9031ED1A11E9C5BCEC686892C33281C5)
| $module: ti_demon_vs_20600
|
| error 0x2160015: A backup error.
| line: 0xa340ffd3416335cf
| file: d:\bs_hudson\workspace\mod-disk-backup\470\product\core\da_api\backup.cpp:353
| function: da_backup::Commit
| line: 0xa340ffd3416335cf, d:\bs_hudson\workspace\mod-disk-backup\470\product\core\da_api\backup.cpp:353, da_backup::Commit
| $module: disk_backup_vs_470
|
| error 0x29b138d: Input/output error
| line: 0x30ba355f9fd4ffbd
| file: d:\bs_hudson\workspace\mod-disk-backup\470\product\core\resizer\archive3\utils.cpp:364
| function: `anonymous-namespace'::ArchiveWriterImpl::CoroutineFunc
| line: 0x30ba355f9fd4ffbd, d:\bs_hudson\workspace\mod-disk-backup\470\product\core\resizer\archive3\utils.cpp:364, `anonymous-namespace'::ArchiveWriterImpl::CoroutineFunc
| function: archive_stream_write_shbuf
| path: \\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx
| $module: disk_backup_vs_470
|
| error 0xfff0: Invalid access to memory location.
|
| line: 0x30ba355f9fd4ffbd
| file: d:\bs_hudson\workspace\mod-disk-backup\470\product\core\resizer\archive3\utils.cpp:364
| function: `anonymous-namespace'::ArchiveWriterImpl::CoroutineFunc
| line: 0x30ba355f9fd4ffbd, d:\bs_hudson\workspace\mod-disk-backup\470\product\core\resizer\archive3\utils.cpp:364, `anonymous-namespace'::ArchiveWriterImpl::CoroutineFunc
| function: pcs_co_file_writev
| path: \\?\G:\WDC WD3003FZEX-00Z4SA0 01.01A01-0001.tibx
| code: 0x800703e6
| $module: disk_backup_vs_470
8/24/2019 1:16:22 PM: -04:00 31656 E013C0005: Error 0x13c0005: Operation has completed with errors.
| trace level: error
| line: 0x9f2c53c72e8bced6
| file: c:\bs_hudson\workspace\1088\products\imager\demon\main.cpp:736
| function: main
| line: 0x9f2c53c72e8bced6, c:\bs_hudson\workspace\1088\products\imager\demon\main.cpp:736, main
| $module: ti_demon_vs_20600

Start: 8/24/2019 11:29:29 AM
Stop: 8/24/2019 1:16:22 PM
Total Time: 01:46:53

Beginner
Posts: 3
Comments: 21

#7

I'm going to attach as a .log file since the copy/paste doesn't look as nice.

Thanks,

Patrick

Attachment Size
509220-171234.log 2.84 KB
Beginner
Posts: 3
Comments: 21

#8
Bobbo_3C0X1 wrote:

I have been wondering how ATI is handling backups. In 2019 and earlier, if you had Windows File Explorer open while the backup was running, you could refresh the view and watch the .tib file grow in size. I've monitored this in 2020 and do not see the file growing the same way. If the archive is being built in memory before being committed to disk, I can see this being a huge issue. I've got 32GB of memory and have only had to back up a maximum of about 100GB in all my tests, so I haven't hit this issue, but I could definitely see a problem for larger backups if this is true. Please open a support case for now and keep us posted!

 Hi guys,

I just submitted a support case. I included the detailed log file that I've posted in this thread. I was not able to include my System Report log file, though; the site said the file was too large.

I'm linking it here:

https://patrickfixedit.com/Acronis/

Best,

Patrick

Beginner
Posts: 3
Comments: 21

#9

https://patrickfixedit.com/Acronis/

System report and log file are posted in the above dir.

I tried to upload them in my support ticket but web site said files too large.

Thanks!

Patrick

Forum Star
Posts: 130
Comments: 2911

#10

A quick look at the log shows two (or more) entries where there is a destination-unavailable message. The problem is, we do not know why the destination has become unavailable, so no progress there.

Looking at the log I get the impression that the destination disk is an internal one. I have a WD Blue 4TB that keeps on disappearing while copying files to it. The disk itself is fine - I have done numerous tests. I suspect there is a problem with either the SATA port on the motherboard, the SATA cable or the Molex-to-SATA power cable. So you could try changing the SATA port, using different cables, and see if any combination thereof remedies the problem.

Ian

Legend
Posts: 81
Comments: 17519

#11

8/24/2019 1:16:22 PM: -04:00 31656 E000B042F: Error 0xb042f: Destination is unavailable.
| error 0x2160015: A backup error.
| function: da_backup::Commit
| error 0x29b138d: Input/output error
| function: archive_stream_write_shbuf
| error 0xfff0: Invalid access to memory location.
| function: pcs_co_file_writev
| path: \\?\G:\WDC WD3003FZEX-00Z4SA0 01.01A01-0001.tibx
8/24/2019 1:16:22 PM: -04:00 31656 E013C0005: Error 0x13c0005: Operation has completed with errors.

Looking at the log detail reinforces that there is potentially a memory issue at work here: ATI is trying to commit data held in a shadow buffer (memory) area and is hitting a problem doing so.

As this is a new .tibx format task, there may be further information in the backup_worker log that covers the same time period as above. 
The log is found at C:\ProgramData\Acronis\TrueImageHome\Logs\backup_worker and has a name such as backup_worker_2019_08_24-01-16-22.0.log

If you want to post the log information, please zip the original log file and attach the zip file to this topic (so that the original file name is retained).

Beginner
Posts: 3
Comments: 21

#12

Thank you for looking into this. I've attached the log file you requested, zipped.

Best,

Patrick

Attachment Size
509316-171255.zip 4 KB
Legend
Posts: 81
Comments: 17519

#13

Patrick, thanks for the backup_worker log information. This covers a different time period from your other logs posted in this topic, but it still shows the same error along with other errors (see below).

08/24/2019 16:29:30  >>>  id=10001   
action=browse   
agent="Acronis True Image 2020 24.3.1.20600 Win"   
archive="G:\\WDC WD3003FZEX-00Z4SA0 01.01A01.tibx"   
 
08/24/2019 16:29:30  type=log; level=inf;  message=  
ar#1: opening archive path="\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx" in readonly mode;  
 
08/24/2019 16:29:30  type=log; level=err;  message=  
io: failed to open '\\?\G:\WDC WD3003FZEX-00Z4SA0 01.01A01.tibx' (win_err=-2);  
 
08/24/2019 16:29:30  type=log; level=err;  message=  
io#1: failed to open "\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx" (pcs_err=-8);  
 
08/24/2019 16:29:30  type=log; level=err;  message=  
ar#1: failed to open archive path="\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx" mode=readonly uuid=00000000000000000000000000000000, err=-5022 (File not found);  
 
08/24/2019 16:29:30  type=log; level=err;  message=  
unable to open archive file (err -5022); 

08/24/2019 16:29:30  type=retcode; value=5022; id=10001;   
08/24/2019 16:29:30  >>>  id=1   
action=backup   
action=cleanup   
action=metainfo   
disk-backup   
agent="Acronis True Image 2020 24.3.1.20600 Win"   
archive="G:\\WDC WD3003FZEX-00Z4SA0 01.01A01.tibx"   
08/24/2019 16:29:30  type=log; level=inf;  message=  
ar#1: opening archive path="\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx" in append mode (create);  
 
08/24/2019 16:29:30  type=log; level=inf;  message=  
ar#1: opened archive path="\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx" mode=append uuid=54427a83645204b2dbf44b0ab00fba0c reopen=0:0;  

<< removed all the file exclusions information entries for better reading..! >>

08/24/2019 16:29:30  type=log; level=inf;  message=  
ar#1: archive_slice_start_chain_ex(type=full);  
 
08/24/2019 16:29:30  type=log; level=inf;  message=  
ar#1: archive_slice_create(sid=1,type=full,user_type=full,uuid=00000000000000000000000000000000,ctime=0);  
 
08/24/2019 16:29:30  type=log; level=inf;  message=  
ar#1: archive_slice_create(sid=2,type=inc,user_type=full,uuid=00000000000000000000000000000000,ctime=0);  
 
08/24/2019 17:27:40  type=log; level=inf;  message=  
io#1: start new vol 1 at 0x7d00000000;  
 
08/24/2019 18:16:19  type=log; level=wrn;  message=  
pcs_errno_to_err: Failed to convert errno 998 to pcs error. Return PCS_ERR_UNKNOWN value.;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
pcs_co_file_writev(\\?\G:\WDC WD3003FZEX-00Z4SA0 01.01A01-0001.tibx) failed: 998 (pcs_err=4095);  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe904c06000 in vol:1:0x6c04c06000;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe905006000 in vol:1:0x6c05006000;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe905406000 in vol:1:0x6c05406000;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe905806000 in vol:1:0x6c05806000;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe905c06000 in vol:1:0x6c05c06000;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe906006000 in vol:1:0x6c06006000;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x400000 err -5005, offs 0xe906406000 in vol:1:0x6c06406000;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
ar#1: archive closing;  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
io#1: write 0x91000 err -5005, offs 0xe906806000 in vol:1:0x6c06806000;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
io#1: total req: 238655 (rd: 0 wr: 238632 sync: 23), pgreq: 244344983 (rd: 0 wr: 244344983) sync: 4175.5 ms;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
io#1: pg cache hit=834122 ra_hit=0 ra_pages=0 ra=0.00%;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: dmap reads: 1081 dirs 106646 leaves 0 ra (0.00%);  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: segment_map reads: 51805 dirs 263882 leaves 0 ra (0.00%);  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: dedup_map reads: 67121 dirs 121074 leaves 0 ra (0.00%);  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: umap reads: 21 dirs 11233 leaves 0 ra (0.00%);  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
ar#1: umap allocations: 4080526 times, 0ms total, 0ms max, 0ms average;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
ar#1: commits: 24 times, 6068ms total, 352ms max, 252ms average;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
ar#1: wait stats: wr 6131682 rd 0 compr 590587 decompr 0 (ms);  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: dmap writes: 121725 pgs;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: dmap merges: 285 times, 11450ms total, 826ms max, 40ms average;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: segment_map writes: 223168 pgs;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: segment_map merges: 341 times, 13116ms total, 998ms max, 38ms average;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: dedup_map writes: 56869 pgs;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: dedup_map merges: 152 times, 8779ms total, 812ms max, 57ms average;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: umap writes: 1234 pgs;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
lsm#1: umap merges: 21 times, 212ms total, 19ms max, 10ms average;  
 
08/24/2019 18:16:19  type=log; level=inf;  message=  
ar#1: seg=998916.52 MB avg=269.60 KB user=1010325.64 MB ratio=97.47% pad=12106426463 (1.21%);  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
ar#1: archive close (commit_seq=22, reuse_seq=0, file_size=1000837050368, uuid=54427a83645204b2dbf44b0ab00fba0c) rc=-5005 (Input/output error);  
 
08/24/2019 18:16:19  type=log; level=err;  message=  
image backup: failed to close archive: 0x29b138d;  
 
08/24/2019 18:16:22  type=commonerror;  value=  
Base64 decode: ?5cA??@?A backup error.
$module Adisk_backup_vs_470
$file Ad:\bs_hudson\workspace\mod-disk-backup\470\product\core\da_api\backup.cpp
$func Ada_backup::Commit
$line Na       ?????_5?0Input/output error function Aarchive_stream_write_shbuf path W\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx
$module Adisk_backup_vs_470
$file Ad:\bs_hudson\workspace\mod-disk-backup\470\product\core\resizer\archive3\utils.cpp $func A`anonymous-namespace'::ArchiveWriterImpl::CoroutineFunc
$line Nl       ??  ???_5?0Invalid access to memory location.
 function Apcs_co_file_writev path W\\?\G:\WDC WD3003FZEX-00Z4SA0 01.01A01-0001.tibx code N??   
$module Adisk_backup_vs_470
$file Ad:\bs_hudson\workspace\mod-disk-backup\470\product\core\resizer\archive3\utils.cpp
$func A`anonymous-namespace'::ArchiveWriterImpl::CoroutineFunc
$line Nl         
 
08/24/2019 18:16:22  type=log; level=inf;  message=  
lsm#1: dedup_map nr_lookup=10167101 nr_found=79728 false+=79274 (0.78%/99.43%);  
 
08/24/2019 18:16:22  type=retcode; value=4095; id=1;   
08/24/2019 18:16:22  >>> exit   
08/24/2019 18:16:22  >>> exit  

The initial messages in the log show that ATI 2020 first tries to open G:\\WDC WD3003FZEX-00Z4SA0 01.01A01.tibx in read-only mode to test that the archive is available/accessible, and that this fails!

If I look at one of my own backup tasks, you can see the difference immediately:

08/23/2019 18:10:01  >>>  id=10001   
action=browse   
agent="Acronis True Image 2020 24.3.1.20600 Win"   
archive="E:\\AcronisBackup\\Test\\Win 10 SSD Diff E.tibx"   
 
08/23/2019 18:10:01  type=log; level=inf;  message=  
ar#1: opening archive path="\\?\E:\AcronisBackup\Test\/Win 10 SSD Diff E.tibx" in readonly mode;  
 
08/23/2019 18:10:01  type=log; level=inf;  message=  
ar#1: opened archive path="\\?\E:\AcronisBackup\Test\/Win 10 SSD Diff E.tibx" mode=readonly uuid=278da919b08e89ed6e94eac48aba5634 reopen=0:0;  
 
08/23/2019 18:10:01  >>>    
08/23/2019 18:10:01  >>>    
08/23/2019 18:10:01  type=log; level=inf;  message=  
ar#1: archive closing;  
 
08/23/2019 18:10:01  type=log; level=inf;  message=  
io#1: total req: 0 (rd: 0 wr: 0 sync: 0), pgreq: 0 (rd: 0 wr: 0) sync: 0.0 ms;  
 
08/23/2019 18:10:01  type=log; level=inf;  message=  
io#1: pg cache hit=3 ra_hit=0 ra_pages=0 ra=0.00%;  
 
08/23/2019 18:10:01  type=log; level=inf;  message=  
ar#1: archive close (commit_seq=88, reuse_seq=88, file_size=225710960640, uuid=278da919b08e89ed6e94eac48aba5634);  
 
08/23/2019 18:10:01  type=retcode; value=0; id=10001;   
08/23/2019 18:10:01  >>>  id=1   
action=backup   
action=cleanup   
action=metainfo   
disk-backup   
agent="Acronis True Image 2020 24.3.1.20600 Win"   
archive="E:\\AcronisBackup\\Test\\Win 10 SSD Diff E.tibx"   
08/23/2019 18:10:01  type=log; level=inf;  message=  
ar#1: opening archive path="\\?\E:\AcronisBackup\Test\/Win 10 SSD Diff E.tibx" in append mode (create); 

My backup .tibx was found and able to be opened in read-only mode, allowing the backup to proceed normally.

Your log shows:

08/24/2019 16:29:30  type=log; level=err;  message=  
ar#1: failed to open archive path="\\?\G:\/WDC WD3003FZEX-00Z4SA0 01.01A01.tibx" mode=readonly uuid=00000000000000000000000000000000, err=-5022 (File not found);  

It then goes on to try to create the file, and fails later with the further error messages showing that data could not be written to your G: drive location for the .tibx file to be created.

I would check that you can create files in your G: drive backup location, plus run CHKDSK G: /F for that drive; if you have the time, I would suggest also running CHKDSK G: /R to check for any bad sectors.

Beginner
Posts: 3
Comments: 21

#14

Hello Steve,

I ran chkdsk /f on the G: drive; no errors found. Hours ago I started chkdsk /r, and it estimates it has about 10 hours to go. I will report back in the morning.

I did want to point out that I've tried at least 2 external drives and one NAS location. All fail around the 900GB mark, which leads me to think it's not the physical hard drives. Windows memory tests came up clean as well, so I doubt it is my RAM. The only thing that has changed is that I've upgraded to Acronis 2020 from Acronis 2019, and I was backing up large amounts of data with zero problems/failures before the upgrade.

Any thoughts welcome.

Thanks again,

Patrick

Legend
Posts: 81
Comments: 17519

#15

Patrick, another user, Dean, has reported the same issue in the post here.

That topic is not quite the same as this issue, but there is a correlation with the size of the backup files involved for Dean.

I would recommend opening a Support Case for this with Acronis. Sorry, I see that you have already done this!

Beginner
Posts: 0
Comments: 19

#16

Yes, I can confirm the same issue is happening here. I have 32GB of RAM and have tested it extensively. The RAM is certainly not an issue, as I run a few VMs that also use that RAM without issues. I'm also on Windows 10 Pro version 1903 (May 2019 Update). I know they changed the behavior of antimalware and AV scanning in 1903, and maybe those changes are interfering with ATI 2020? I can stop my protection services and see if I can get past this error.

I've also tried 2 different NAS locations and 1 USB 3 location to write backups. All fail around the same mark.

Forum Hero
Posts: 68
Comments: 8096

#17

For these large backup failures in 2020, has anyone left Task Manager open and monitored memory utilization around the time the failure generally happens? It seems like ATI 2020 is either caching to raw memory and running out, or caching to a temp file on disk (somewhere hidden, as I haven't found it yet while it's being created) and running out of space.

In addition to watching Task Manager, have you tried splitting the backup file into smaller chunks? Say, instead of a single large .tibx file, setting the max size to 500GB and seeing whether it still gets the same error or not?

Beginner
Posts: 3
Comments: 21

#18

I have tried splitting mine at 500GB. It still fails with the same error.

Legend
Posts: 81
Comments: 17519

#19

I have been monitoring RAM usage while running full backups on my own computer and am not seeing any evidence of high memory usage at any time during the backup. I cannot emulate a backup of the size involved in this topic, but I have tested with a system backup of over 100GB to my external USB 3 drive and saw a maximum RAM usage of only around 4-5% of the 8GB installed. So unless ATI changes behaviour with these very large backup data sizes, this remains a puzzle.

Forum Member
Posts: 7
Comments: 19

#20

I experienced the same issue as patrickfixedit when doing a full drive backup of 2TB of data to a 6TB drive in a USB3 enclosure. The backup would get to 700GB +/- and fail with an error saying that the destination drive was no longer available. The first couple of times it happened, the backup failed overnight, so I wasn't at the computer to see what happened, but the Windows 10 Event Viewer did not show a loss of connection for the destination drive. The third time it happened, I was at the computer viewing a file on the destination drive, so I know the drive did not disconnect during the backup.

I rolled back to ATI 2019 and completed a full backup of the drive without issue. Now that I have a backup I'll set it aside and reinstall ATI 2020 to provide whatever help I can in troubleshooting this issue.

Forum Hero
Posts: 68
Comments: 8096

#21

I've been searching Google and haven't found anything definitive just yet... but it looks like it could be a 32-bit limitation (possibly interacting with the new file format) caused by bad code in the application...

https://blogs.msdn.microsoft.com/cie/2013/10/31/compute-emulator-invalid-access-to-memory-location/

"As you can see in the dumpbin output, this x86 application can handle addresses larger than 2 gigabytes when running on a 64-bit version of the operating system. Since most of us are now running 64-bit OS versions, when Visual Studio runs out of heap addresses in the lower 2 GB it will return 32-bit addresses that normally would belong to the kernel, like A3BA2E38. You know this by looking at the top hex digit of the address, A, which is greater than 7.

A bug in the kernel address-conversion routines incorrectly sign-extended the 32-bit address to a 64-bit address. The incorrectly sign-extended address looked like this: 0xffffffff`a3ba2e38 instead of 0x00000000`a3ba2e38. The sign-extended address is reserved for the kernel so it is not part of the user-mode address pool. Correcting the address-conversion routines eliminates the bad address and everything works as expected."

https://social.technet.microsoft.com/Forums/lync/en-US/31da36e8-7fae-4565-9c19-9c5ae6f6c7eb/windows10-invalid-access-to-memory-location?forum=win10itprogeneral

I've read similar posts about this specific error in the Malwarebytes Antimalware (a tool I use) forum too, but it seems like they only respond to the users via PM when it is discussed. It doesn't look completely unique to Acronis, but it could be related to a change that was made in 2020 in conjunction with it...

https://forums.malwarebytes.com/topic/217541-error-invalid-access-to-memory-location/

 

Legend
Posts: 81
Comments: 17519

#22

Rob, that does sound like a probable scenario for this issue. 

Frequent Poster
Posts: 141
Comments: 838

#23

I guess we should have tested a large backup during the Beta test. (My largest was about 2GB, and that was using FTP, so no .tibx.) This, at least, should be easily reproduced by Acronis support if it is really as described in this thread. This sounds like a very major problem, since limiting the backup file size doesn't circumvent it.

Forum Star
Posts: 130
Comments: 2911

#24

I tested largish backups to Cloud, local disk and my NAS. This was the system disk on one of my PCs, which had over 100GB of backed-up content.

It could go back to problems in porting the 64-bit code from Backup 12.5, in combination with the issue identified by Rob.

Ian

Beginner
Posts: 0
Comments: 19

#25
IanL-S wrote:

I tested largish backups to Cloud, local disk and my NAS. This was the system disk on one of my PCs, which had over 100GB of backed-up content.

I don't feel that 100GB backups are large backups. The issue seems to start around 700GB and up from there. My latest attempt on the USB3 drive today, after the chkdsks were run, stopped around 718GB. One tibx file on my NAS is 780GB, and another is 727GB. I'm trying to get about 1.7TB backed up. Since I upgraded from 2017 to 2020, is there any way that I can downgrade to 2019? I'm getting kind of nervous not having good backups. I'm in IT, and we consider not having backups almost as disastrous as having a disaster where you lose disks. I've lost enough in the past that daily backups are my norm.

Forum Hero
Posts: 68
Comments: 8096

#26

Dean, I'm not sure what Acronis can do in the interim. I'd reach out to forum moderator Ekaterina directly via PM to see if there is something they can do until this is sorted out. In the meantime, they'll likely want you to run a system report and submit it with the internal 2020 application feedback so they can review the logs and look into the issue while you still have it installed.

Frequent Poster
Posts: 141
Comments: 838

#27

If the drives you are trying to back up are not bootable system drives, is there any way you can get everything backed up using "Files and Folders" backups?  I know this is a terrible work-around, but files and folders backups still use the old .tib format.  Also, it would allow you to subdivide the backup into smaller chunks.  (It appears that telling ATI to divide the output files does not circumvent the problem.)

This would allow you to get your backups while Acronis support works on a solution.

In any case, you should follow Bobbo's advice and contact Ekaterina.  I'm certain that she, along with all of Acronis,  would want you to have a working solution to this. 

Beginner
Posts: 2
Comments: 17

#28

Same thing here... after the 2020 upgrade, backups started to fail because the destination is unavailable.

Forum Member
Posts: 3
Comments: 36

#29

I am having the same issue since I upgraded. I have since used the Acronis cleanup tool and gone back to 2019, and I am having no issues with 2019, just 2020.

Beginner
Posts: 2
Comments: 17

#30

I uninstalled 2020, ran the cleanup tool once and installed 2019 again, and backup is working again. So there is a major bug in 2020 which prevents backups from working when there is over a terabyte to back up (I have about 2TB of data).

Forum Member
Posts: 3
Comments: 36

#31

Mine stopped after 600GB.

Beginner
Posts: 0
Comments: 19

#32

I am splitting my backups to get around the issue: the system drive in one backup, and file/folder backups for the rest, which is horrible. I'll be testing Macrium Reflect as well, but before I purchase that, I will hold out hope that Acronis can fix this nasty bug in quick fashion. Not being able to back up larger systems seems like a *very* bad bug for a backup product.

Forum Hero
Posts: 68
Comments: 8096

#33

Dean,

I've sent an email up the chain to Acronis personnel and have asked that they take a look at this forum thread.  Hopefully, they will respond here or get back to you and Nick via PM as soon as possible.

 

Beginner
Posts: 0
Comments: 1

#34

Same issue for me. I've tried different USB 3.0 ports and different external USB drives, which always resulted in the same error message. Then I reverted back to ATI 2019, which still works without issues on the same hardware.

My backup size is 1.2TB and it fails at about 600GB.

Forum Moderator
Posts: 121
Comments: 4612

#35

Hello Everyone,

Our development team is investigating why the issue occurs on some environments and would be grateful for an opportunity to conduct remote sessions on the affected machines. If you have some time for investigation, please open a support ticket (how to submit a support ticket) and share the case ID with me. Thank you!

Beginner
Posts: 3
Comments: 21

#36
Ekaterina wrote:

Hello Everyone,

Our development team is investigating why the issue occurs on some environments and would be grateful for an opportunity to conduct remote sessions on the affected machines. If you have some time for investigation, please open a support ticket (how to submit a support ticket) and share the case ID with me. Thank you!

 Hello Ekaterina,

I submitted a support ticket a couple of days ago and got a response about an hour ago. I am assuming one of these numbers in the subject of the email that Lucy sent to me is the case ID:

[04125560] Backup fails at around 900GB on large backups    [ ref:_00D30Zcb._5001T1GNevg:ref ]

I have uninstalled Acronis True Image 2020, installed 2019, and made a full set of backups. Now that I have a full set of backups, I am more than happy to work remotely with the engineers to see if we can resolve this issue. I can re-install 2020, and I have free time today.

Best,

Patrick

Beginner
Posts: 0
Comments: 1

#37

I am a new user (unable to roll back to 2019). I am having the same issue. 1.2 TB backup fails at 600GB.

My logs are virtually identical and the job fails at this line:

error 0xfff0: Invalid access to memory location.

 

Beginner
Posts: 0
Comments: 3

#38

I spoke with support and they acknowledged it is a known issue. They said a file and folder backup would work as a workaround while they patch, but I have not tried this yet. They resisted crediting the paid tech call, but after arguing, they agreed to credit it. We will see...

Forum Star
Posts: 45
Comments: 1476

#39

I can confirm that a backup of an internal data disk to a USB3 drive from Windows failed at 961 GB. I then tried the same backup from WinRE recovery media. The backup was successful at 1.31 TB. Using WinPE/RE recovery media should be a viable workaround until Acronis releases a fix.

Forum Member
Posts: 3
Comments: 36

#40
Mustang wrote:

I can confirm that a backup of an internal data disk to a USB3 drive from Windows failed at 961 GB. I then tried the same backup from WinRE recovery media. The backup was successful at 1.31 TB. Using WinPE/RE recovery media should be a viable workaround until Acronis releases a fix.

Or reinstall 2019.  I do daily backups while I sleep, so restarting the computer to USB recovery media would not be feasible for me.

Forum Star
Posts: 130
Comments: 2911

#41

If I am reading it correctly, this thread indicates that the problem was not present in the beta version of ATI.

Ian

Forum Member
Posts: 3
Comments: 36

#42
IanL-S wrote:

If I am reading it correctly, this thread indicates that the problem was not present in the beta version of ATI.

Ian

Correct.  The beta had its issues but it at least did backups. 

Forum Star
Posts: 130
Comments: 2911

#43

Nick, thanks for the confirmation.

Ian

Beginner
Posts: 0
Comments: 2

#44

Having the same problem with large (2+ TB) backups. Will try "files and folders" as a temporary workaround until we have a permanent solution.

Mark

Beginner
Posts: 0
Comments: 3

#45

Finished a "files and folders" 2TB backup yesterday, and I can confirm it does work without errors. This will have to do until a solution is found.

Legend
Posts: 81
Comments: 17519

#46

Just for clarification here: 

Files & Folders backups use the older .tib file architecture.

This particular issue with large backups (>600GB) is with Disks & Partitions backups using the new .tibx file architecture.

A workaround here is to split backups into smaller sizes where possible, and to use Files & Folders backups where there is no OS involvement for bootability etc.

Please see the post by Ekaterina earlier in this thread (copied below):

Hello Everyone,

Our development team is investigating why the issue occurs in some environments and would be grateful for an opportunity to conduct remote sessions on the affected machines. If you have some time for investigation, please open a support ticket (how to submit a support ticket) and share the case ID with me. Thank you!

Beginner
Posts: 0
Comments: 1

#47

Can confirm as another ATI 2020 user who didn't have the problem with 2019.

Disk/Partition backup > 700GB to an external USB3 drive

Attachment Size
509816-171394.log 2.83 KB
Beginner
Posts: 0
Comments: 3

#48

Just a note: I was able to perform a "files and folders" backup without splitting. And yes, the issue is with the .tibx architecture, because even splitting into smaller file sizes fails if the partition selected for backup is greater than 600GB. It failed on me at around 800GB each time.

Frequent Poster
Posts: 141
Comments: 838

#49
Nick Winebarger wrote:
IanL-S wrote:

If I am reading it correctly, this thread indicates that the problem was not present in the beta version of ATI.

Ian

Correct.  The beta had its issues but it at least did backups. 

A minor clarification: the General Availability version of ATI 2020 also does most backups.  It's just these very large backups that are failing.  If anybody has confirmed doing one of these large backups in the Beta test, I missed it.

But it would actually be good to hear that large backups worked in the Beta test.  That would imply a bug introduced in the most recent build rather than a fundamental flaw in the .tibx design.

Legend
Posts: 81
Comments: 17519

#50

All of my testing during the ATI 2020 Beta was done on systems where the main drive / partitions were all under 1TB in total size, and where the actual backup size was well below half of that!

My current system has a 128GB NVMe M.2 SSD for the OS & applications, with a separate 1TB HDD where I have separate partitions for storing my larger user data content, none of which is large enough to trigger this error situation.