
Saturday, January 9, 2021

Create Your Own Cloud Photo Storage Site in BackBlaze B2 Using Cloudflare and ShareX

I had been using Imgur to store my blog photos for a long time; before that it was PostImage, PhotoBucket, and ImageShack. After testing the BackBlaze B2 service, especially integrated with ShareX and fronted by the CDN provider Cloudflare, I decided to give BackBlaze a try as the photo site for all my blog screenshots and photos. The beauty of integrating with Cloudflare is that I can use my own domain name in the URLs, and Cloudflare's caching reduces Backblaze usage so the daily cap limits are not hit.

The free plan at BackBlaze includes 10 GB of storage, and Cloudflare can be configured to work with it very well. The ShareX screen capture software can also auto-generate URLs on my own domain for those photos. It sounds quite promising as an image host for a blog. There are cap limits on Class B and Class C transactions, but they are usually fine for a small website like mine.

This post is a summary of all the steps for installation and configuration.

Backblaze offers 10 GB of free storage and charges $0.005/GB/month thereafter. It also offers a limited number of free Class B and Class C transactions (2,500 per day). You probably won't use many Class C transactions, but Class B transactions are used for b2_download_file_by_name requests, and that cap can easily be exceeded. With proper caching rules set up in Cloudflare, those extra API requests can be avoided. Thanks to the Bandwidth Alliance, bandwidth between Cloudflare and Backblaze is entirely free.

The daily Class B transaction cap will usually be reached quickly if you have not integrated with Cloudflare or another Bandwidth Alliance vendor.

Create and Configure Your BackBlaze Account

1  Sign Up for an Account. The free tier is enough for this lab.

2  Log into Your Account and Create a Public Bucket

Here is an extra important step when setting up your bucket: you will have to put one cache-control directive into the Bucket Info field: {"cache-control":"max-age=43200"}

43200 is in seconds: Cloudflare will not re-fetch the resource from the origin (BackBlaze) for 43200 seconds (12 hours).

You can also set it to a larger value such as 720000 for a longer cache time.
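
If you prefer the command line over the web UI, the same bucket info can be set with the official b2 CLI. A sketch, assuming the bucket from this lab (verify the flag names with b2 update-bucket --help, as they differ across CLI versions):

    b2 update-bucket --bucketInfo '{"cache-control":"max-age=43200"}' test1-51sec allPublic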


3  Upload a File to Get the Friendly URL:

For example, after uploading a file I got a friendly URL served from the bucket's download hostname (of the form fNNN.backblazeb2.com, where NNN is your cluster number). You will need this URL for both ShareX and Cloudflare.

Friendly URL format: https://fNNN.backblazeb2.com/file/test1-51sec/2020/<filename>

test1-51sec is the bucket name; 2020 is the folder I created in the bucket.

4  Add a New Application Key

The application key is only shown once, and you will need it to access your bucket. You can create more keys later.

Important: make sure you create a key scoped to only one bucket, not to all buckets.

Both the keyID and the applicationKey will be needed for ShareX to upload photos into this bucket.
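
If you prefer the command line, a key scoped to one bucket can also be created with the b2 CLI. A sketch (the key name and capability list here are illustrative; verify the flags with b2 create-key --help):

    b2 create-key --bucket test1-51sec sharex-upload-key listFiles,readFiles,writeFiles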

Configure ShareX 

1  Right-click the ShareX icon and select the Destinations -> Destination Settings... menu.

2  Configure the Backblaze B2 destination as shown in the screenshot, and sketched below.
I set the upload path with year and month parameters to organize the photos.
For now, use the Backblaze friendly URL as the custom URL. Later, after we configure Cloudflare to serve the Backblaze bucket from our own domain, we can change this custom URL.
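
As a rough sketch, the relevant fields in the ShareX Backblaze B2 destination look like this (field names and name patterns may vary by ShareX version; the keyID/applicationKey come from step 4 above, and the img.example.com domain is hypothetical):

    Application key ID: <keyID>
    Application key:    <applicationKey>
    Bucket:             test1-51sec
    Upload path:        %y/%mo/%fn
    Custom URL:         https://fNNN.backblazeb2.com/file/test1-51sec  (later: https://img.example.com)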

Configure Cloudflare: Using a Page Rule


Ensure your SSL setting is set to Full; this is the default setting. Please note that Backblaze B2 only supports secure (HTTPS) connections.

1  Create a new CNAME record for your Backblaze friendly URL, as in the example below.
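
For example (the img subdomain is hypothetical, and your Backblaze cluster number may differ from 004):

    Type: CNAME    Name: img    Target: f004.backblazeb2.com    Proxy status: Proxied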

2  Create a page rule to cache requests matching this URL pattern.

Cache level 'Standard' should be enough, since it already caches most assets (images, CSS, JS). With a 'Cache Everything' page rule, you are instructing Cloudflare to also cache HTML (page) output. However, even with this rule enabled, HTML documents will not be cached anyway unless you have an 'Edge Cache TTL' rule or 'cache-control' headers from your origin, so this rule alone likely does little or nothing. 'Cache Everything' might also cause issues with third parties such as Ezoic.

Example of Page Rule:
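
A textual sketch of such a page rule (the subdomain is hypothetical; tune the values to your site):

    URL: img.example.com/file/*
    Settings: Cache Level: Cache Everything; Edge Cache TTL: a month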

SSL set to Flexible might not work if your site is already using Strict SSL.

I also found other page rule examples online with slightly different settings.

Please do not copy all the settings exactly as shown in the examples above; you will need to tune them based on your own site configuration.

3  Change the ShareX settings to use this subdomain URL, which is now cached by Cloudflare.

Configure Cloudflare: Using a Transform Rule

Cloudflare allows free users at most 10 active Transform Rules, which helps you stay within the free plan's total limit of 3 page rules.


Basically, to remove /file/bucketname from our prettier URL, we'll use the URL rewrite feature of Cloudflare Transform Rules.

For anything that matches this rule we want to automatically add the '/file/bucketname' part, so under the 'Then...' part of the rule choose the 'Rewrite to...' option, set the drop-down to 'Dynamic', and set the rule to the expression sketched below.
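
As a sketch, the dynamic rewrite expression looks like this (bucketname is a placeholder for your actual bucket name):

    concat("/file/bucketname", http.request.uri.path)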


This tells the rules engine to concatenate this with the trailing part of the request. We don't ever see this, it'll just happen in the background.

Earlier, we tested that an image at the Backblaze friendly URL could also be accessed through the CNAME subdomain. With the transform rule, we can go one step further and access the image without the /file/bucketname prefix, as illustrated below.
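
For illustration, with a hypothetical subdomain img.example.com and a bucket named bucketname, the three URL forms are:

    Friendly URL:        https://fNNN.backblazeb2.com/file/bucketname/2020/test.png
    With CNAME:          https://img.example.com/file/bucketname/2020/test.png
    With transform rule: https://img.example.com/2020/test.png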

There is more we would need to do to remove B2's extra HTTP response headers; the Cloudflare Workers script later in this post takes care of that.

Verify Whether the Cloudflare Cache Is Being Used

Use the browser's developer tools (F12 in Chrome) to check the response headers.

Or use curl from Linux (substitute your own image URL):
  • curl --head <image-url>
  • curl -svo /dev/null <image-url>
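
As a quick one-liner, you can filter out just the cache status header (the URL here is a hypothetical placeholder):

    curl -sI https://img.example.com/2020/test.png | grep -i cf-cache-status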

netsec@hpthin:~$ curl -svo /dev/null
*   Trying
* Connected to ( port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [122 bytes data]
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
{ [19 bytes data]
* TLSv1.3 (IN), TLS handshake, Certificate (11):
{ [2330 bytes data]
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
{ [79 bytes data]
* TLSv1.3 (IN), TLS handshake, Finished (20):
{ [52 bytes data]
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* TLSv1.3 (OUT), TLS handshake, Finished (20):
} [52 bytes data]
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.;
*  start date: Jun  6 00:00:00 2022 GMT
*  expire date: Jun  5 23:59:59 2023 GMT
*  subjectAltName: host "" matched cert's "*"
*  issuer: C=US; O=Cloudflare, Inc.; CN=Cloudflare Inc ECC CA-3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
} [5 bytes data]
* Using Stream ID: 1 (easy handle 0x56103d2d9f00)
} [5 bytes data]
> GET /file/netsec/2023/01/chrome_GkBk1AoDaL.png HTTP/2
> Host:
> user-agent: curl/7.68.0
> accept: */*
{ [5 bytes data]
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
{ [238 bytes data]
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
{ [238 bytes data]
* old SSL session ID is stale, removing
{ [5 bytes data]
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!
} [5 bytes data]
< HTTP/2 200
< date: Mon, 30 Jan 2023 22:03:31 GMT
< content-type: image/png
< content-length: 73641
< x-bz-file-name: 2023/01/chrome_GkBk1AoDaL.png
< x-bz-file-id: 4_z1ffd0e4783a5d79a7cea0c15_f11011ae48f340b0c_d20230130_m171133_c004_v0402013_t0052_u01675098693590
< x-bz-content-sha1: 31ea82aa3a9d0dd9e496fd2ab2cba6131f865339
< x-bz-upload-timestamp: 1675098693590
< cache-control: max-age=31536000
< content-disposition: inline; filename=chrome_GkBk1AoDaL.png
< x-bz-info-src_last_modified_millis: 1675098694039
< last-modified: Mon, 30 Jan 2023 17:11:56 GMT
< cf-cache-status: HIT
< age: 17288
< accept-ranges: bytes
< server-timing: cf-q-config;dur=7.0000005507609e-06
< report-to: {"endpoints":[{"url":"https:\/\/\/report\/v3?s=OtQzo%2F7UZfScoGin1exNkZteB%2Fi4T1Max1llUHG8VAQo%2FU4dp2g8ZNkDrOxi%2Fa0%2B6yKRNdI0guoimWIB0Bt6G0iyvCnOdblAPZvWcYHXNdZas%2BH9AIk4P3lz1g%2B7Cg%3D%3D"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
< cf-ray: 791d7f82cba7380c-IAD
< alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
{ [601 bytes data]
* Connection #0 to host left intact

Check the cf-cache-status response header: 'HIT' means you've accessed a cached file; 'MISS' means it had to be retrieved from the origin server.

Enable Hotlink Protection

You can enable hotlink protection to prevent other sites from embedding your images, but it might cause problems if you want to use the images on other sites yourself.

Using Cloudflare Workers


Based on online discussion, it seems that serving images via a page rule / transform rule may still violate Section 2.8 (Limitation on Serving Non-HTML Content) of Cloudflare's self-serve terms, but serving them through Workers is fine.

You can also use Workers to replace the page rules, as shown below.
Note: please change the b2Domain and b2Bucket values based on your settings (the values below are placeholders).

'use strict';
const b2Domain = 'img.example.com'; // configure this as per instructions above (placeholder)
const b2Bucket = 'bucket-name'; // configure this as per instructions above (placeholder)
const b2UrlPath = `/file/${b2Bucket}/`;
addEventListener('fetch', event => {
    event.respondWith(fileReq(event));
});
// define the file extensions we wish to add basic access control headers to
const corsFileTypes = ['png', 'jpg', 'gif', 'jpeg', 'webp'];
// Backblaze returns some additional headers that are useful for debugging,
// but unnecessary in production. We can remove these to save some size.
const removeHeaders = [
    'x-bz-content-sha1',
    'x-bz-file-id',
    'x-bz-file-name',
    'x-bz-info-src_last_modified_millis',
    'x-bz-upload-timestamp'
];
const expiration = 31536000; // override browser cache for images - 1 year
// define a function we can re-use to fix headers
const fixHeaders = function(url, status, headers){
    let newHdrs = new Headers(headers);
    // add basic CORS headers for images
    if(corsFileTypes.includes(url.pathname.split('.').pop())){
        newHdrs.set('Access-Control-Allow-Origin', '*');
    }
    if(status === 200){
        // override browser cache for files when 200
        newHdrs.set('Cache-Control', 'public, max-age=' + expiration);
    }else{
        // only cache other things for 5 minutes
        newHdrs.set('Cache-Control', 'public, max-age=300');
    }
    // set ETag for efficient caching where possible
    const ETag = newHdrs.get('x-bz-content-sha1') || newHdrs.get('x-bz-info-src_last_modified_millis') || newHdrs.get('x-bz-file-id');
    if(ETag){
        newHdrs.set('ETag', ETag);
    }
    // remove unnecessary headers
    removeHeaders.forEach(header => {
        newHdrs.delete(header);
    });
    return newHdrs;
};
async function fileReq(event){
    const cache = caches.default; // Cloudflare edge caching
    const url = new URL(event.request.url);
    // rewrite the nicer custom-domain URL to the real B2 path
    // (strip the leading slash so we don't end up with a double slash)
    if(url.host === b2Domain && !url.pathname.startsWith(b2UrlPath)){
        url.pathname = b2UrlPath + url.pathname.replace(/^\//, '');
    }
    let response = await cache.match(url); // try to find a match for this request in the edge cache
    if(response){
        // use cache found on Cloudflare edge; set X-Worker-Cache header for helpful debugging
        let newHdrs = fixHeaders(url, response.status, response.headers);
        newHdrs.set('X-Worker-Cache', 'true');
        return new Response(response.body, {
            status: response.status,
            statusText: response.statusText,
            headers: newHdrs
        });
    }
    // no cache; fetch the image and apply Cloudflare lossless compression
    response = await fetch(url, {cf: {polish: 'lossless'}});
    let newHdrs = fixHeaders(url, response.status, response.headers);
    if(response.status === 200){
        response = new Response(response.body, {
            status: response.status,
            statusText: response.statusText,
            headers: newHdrs
        });
    }else{
        response = new Response('File not found!', {status: 404});
    }
    event.waitUntil(cache.put(url, response.clone()));
    return response;
}

Once the Worker is deployed and given a route on your image subdomain, it will serve requests on your custom domain URL in place of the original Backblaze URL, fetching from B2 behind the scenes.
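
If you deploy the script with Wrangler rather than the dashboard editor, a minimal config might look like the sketch below (the name, file, and route are hypothetical; adjust the pattern and zone to your own domain and verify the keys against the current Wrangler docs):

    # wrangler.toml (sketch)
    name = "b2-image-proxy"
    main = "worker.js"
    compatibility_date = "2023-01-30"
    routes = [
        { pattern = "img.example.com/*", zone_name = "example.com" }
    ]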


If you are getting an error such as the following:
ShareX - Error
Could not get B2 upload URL: Got status 400 (bad_request): required field bucketId cannot be null.

That is because your application key is allowed access to all buckets. Please create a new key that allows only one bucket.

Some other solutions, such as COS and OSS, have been compared in this post: 如何挑选一个好的图床来存储图片 (How to choose a good image host for storing pictures).

