Digital Forgery Estimation into DCT Domain - A Critical Analysis

Sebastiano Battiato
University of Catania
Dipartimento di Matematica e Informatica
Viale Andrea Doria 6, Catania, Italy
[email protected]

Giuseppe Messina
University of Catania
Dipartimento di Matematica e Informatica
Viale Andrea Doria 6, Catania, Italy
[email protected]

ABSTRACT

One of the key characteristics of digital images with a discrete representation is their pliability to manipulation. Recent trends in the field of unsupervised detection of digital forgery include several advanced strategies devoted to revealing anomalies by considering various aspects of the multimedia content. One promising approach, among others, exploits the statistical distribution of DCT coefficients in order to reveal the irregularities due to the presence of a signal superimposed over the original one (e.g., copy and paste). As recently proved, the ratio between the quantization tables used to compress the signal before and after the malicious forgery alters the histograms of the DCT coefficients, especially for some bases that are close in terms of frequency content. In this work we analyze in more detail the performance of existing approaches, evaluating their effectiveness on different input datasets with respect to resolution, compression ratio, and kind of forgery (e.g., presence of duplicated regions or image composition). We also present possible post-processing techniques able to manipulate the forged image so as to reduce the performance of current state-of-the-art solutions. Finally, we conclude the paper by outlining future improvements devoted to increasing the robustness and reliability of forgery detection in the DCT domain.

Categories and Subject Descriptors
I.4.0 [Computing Methodologies]: Image Processing and Computer Vision - General

General Terms
Security

Keywords
Digital Forgery, DCT, Digital Forensics, Digital Tampering

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
MiFor'09, October 23, 2009, Beijing, China.
Copyright 2009 ACM 978-1-60558-755-4/09/10 ...$10.00.

1. INTRODUCTION

Forgery is a subjective word. An image can become a forgery based upon the context in which it is used. For instance, an image altered for fun by someone who has taken a bad photo and wants to improve its appearance cannot be considered a forgery, even though it has been altered from its original capture. On the other side of forgery are those who perpetrate a forgery to gain payment and prestige. To achieve such a purpose, they create an image with which they dupe the recipient into believing the image is real. There are different kinds of criminal forgeries:

• Images created through computer graphics tools (like 3D rendering) which look real but are completely virtual.

• Creating an image by altering its content is another method: duping the recipient into believing that the objects in an image are something other than what they really are (e.g., only by altering chrominance values). The image itself is not otherwise altered, and if examined will be proven so (see the example in Fig. 1 from [9]).

• Objects are removed or added; for example, a person can be added or removed. The easiest way is to cut an object from one image and insert it into another image. By manipulating the content of an image, its message can drastically change.

Altering images is not new: it has been around since the early days of photography. The practice has moved into the digital world by virtue of digital cameras and the availability of digital image editing software. The ease of photoshopping 1, which does not require any special skills, makes image manipulation easy to achieve. A good introduction to digital image forgery is given by Baron [1]. The book provides an excellent overview of the topic and describes some detailed examples. Furthermore, it illustrates methods and popular techniques in different contexts, from historical forgeries to forensic aspects.

1 Adobe Photoshop is a popular tool that can digitally enhance images. Images that have been modified using Photoshop or similar drawing tools (e.g., Gimp, Corel Paint, MS Paint, etc.) are described as being "photoshopped" or "shopped". The quality of the shopped images depends on both the tool and the artist.

Figure 1: After 58 tourists were killed in a terrorist attack (1997) at Hatshepsut's temple in Luxor, Egypt, the Swiss tabloid Blick digitally altered a puddle of water (picture on the top) to appear as blood flowing from the temple (figure on the bottom).

The inverse problem of forgery detection is, on the other hand, a big issue. Several techniques take into account multiple aspects of image properties to achieve such a purpose [8]. One promising approach, among others, exploits the statistical distribution of DCT coefficients in order to reveal the irregularities due to the presence of a signal superimposed over the original one (e.g., copy and paste).

In this work we analyze in detail the performance of three existing approaches, evaluating their effectiveness on different input datasets with respect to resolution, compression ratio, and kind of forgery. Starting from a Bayesian approach to identify whether a block is doctored or not, the authors of [13] use a support vector machine to classify the two classes of blocks in a forged JPEG image. In [19], the authors describe a passive approach to detect digital forgeries by checking inconsistencies of blocking artifacts. Given a digital image, they compute a blocking artifact measure based on a quantization table estimated from the power spectrum of the DCT coefficient histogram. The approach in [7] explicitly detects whether part of an image was compressed at a lower quality than the saved JPEG quality of the entire image. Such a region is detected by simply re-saving the image at a multitude of JPEG qualities and detecting spatially localized minima in the difference between the image and its JPEG-compressed counterpart.

All previous studies do not consider a sufficient number of effective cases, including for example different image resolutions, compression ratios and forgery anomalies. Also, the DCT bases to be evaluated for forgery detection should be selected in a proper way. In this paper we start to build a systematic way to analyze the problem of forgery detection in the DCT domain, pointing out both the strengths and weak points of the existing solutions. Also, some post-processing strategies can be devised to properly mask the DCT anomalies.

The rest of the paper is organized as follows. In Section 2 the DCT quantization is described. In Section 3 a deeper description of the aforementioned techniques is presented. The experimental part is described in Section 4, taking into consideration a large database whose aim is to stress the described techniques. Finally, the conclusions are given, drawing possible improvements.

2. DCT QUANTIZATION

DCT codec engines typically apply a quantization step in the transform domain, considering non-overlapping blocks of the input data. Such quantization is usually achieved by a quantization table, useful to differentiate the levels of quantization by adapting the behavior to each DCT basis. The JPEG standard has fixed quantization tables, and just by varying these tables by a single multiplicative factor, different compression ratios (and of course different quality levels) are obtained [11, 15]. As the tables are included in the image file, they are also customizable [2, 3, 5]. In this way, commercial codec solutions exploit proprietary tables. One among others is the widely used Photoshop, which provides thirteen quality levels (from 0 to 12) and, in the "Save for Web" configuration, a hundred quality levels (from 1 to 100). For each basis, the same quantization step qi (for i = 1, ..., 63) is applied over the entire set of non-overlapping blocks, producing a set of integer values that can be simply clustered and that constitute a clear periodic signal with period equal to qi.
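As a concrete illustration of the single multiplicative factor mentioned above, the following sketch scales the standard luminance table from Annex K of the JPEG specification [18] using the libjpeg quality-scaling convention (one common choice, here an assumption: as noted, Photoshop ships its own proprietary tables [12]):

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of [18]).
BASE = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def scale_table(base, quality):
    """Scale the base table by a single factor (libjpeg convention)."""
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    return np.clip((base * s + 50) // 100, 1, 255)

# Quality 50 reproduces the base table; quality 100 collapses to all ones.
print(scale_table(BASE, 50)[0, 0], scale_table(BASE, 100).max())
```

Quality below 50 enlarges every step (coarser quantization), quality above 50 shrinks it; the whole table moves with one scalar, which is exactly why a single quality level fixes the entire set of steps qi.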

Once a first quantization has been performed, a second quantization will introduce periodic anomalies into the signal (see Fig. 2). The anomalies of such periodicity can be analyzed to discover possible forgeries. Mathematically, double quantization can be described as:

Q1,2(u1,i) = [ [u1,i / q1,i] q1,i / q2,i ]    (1)

where q1,i and q2,i are the two quantization factors, u1,i is the DCT coefficient at position i of the current block, and [·] denotes rounding to the nearest integer.
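The effect of Eq. (1) on a coefficient histogram can be reproduced with a short simulation (a sketch: the Laplacian coefficient model is an assumption, and the steps q1 = 5, q2 = 3 match the illustrative values of Fig. 2):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for one DCT sub-band: roughly Laplacian-distributed coefficients.
u = rng.laplace(scale=12.0, size=50_000)

q1, q2 = 5, 3  # first and second quantization steps, as in Fig. 2(c)

# Eq. (1): quantize with q1, de-quantize, then re-quantize with q2.
dq = np.round(np.round(u / q1) * q1 / q2)

# Histogram over the integer values reachable after the second quantization.
values = np.arange(-30, 31)
hist, _ = np.histogram(dq, bins=np.arange(-30.5, 31.5))

# Periodically spaced empty bins appear: round(5k/3) only reaches residues
# {0, 2, 3} mod 5, so every bin congruent to 1 or 4 mod 5 stays empty.
print("empty bins:", values[hist == 0])
```

The empty bins recur with period q1 in the second-quantization domain; this periodic signature, absent after a single compression, is exactly what the detectors discussed below look for.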

3. STATE OF THE ART ANALYSIS

In the following we describe the three aforementioned techniques. In particular, we show typical examples of unrecognized forgery cases, explaining the reasons for such failures.

Quantization tables approximation

The approach of Ye et al. [19] consists of three main steps: collection of DCT statistics, analysis of the statistics for quantization table estimation, and assessment of DCT block errors with respect to the estimated quantization tables. The performance of this technique is strictly related to the number of forged blocks in comparison with the total number of blocks. In other words, the sensitivity of the corresponding forgery detector is high only at lower resolutions; at higher resolutions the performance degrades abruptly, even in the presence of an extended connected region (e.g., faces). Furthermore, as the technique performs an estimation of the quantization tables to identify anomalies from the first quantization to the second one, the algorithm is very sensitive to the quality level and to the number of compressions.

Figure 2: Double quantization artifacts: (a) the distribution of singly quantized coefficients with q1 = 5; (b) the distribution of these coefficients de-quantized; (c) the distribution of doubly quantized coefficients with q1 = 5 followed by q2 = 3 (note the empty bins in this distribution).
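The quantization-step estimation at the core of the approach of Ye et al. [19] can be sketched on simulated data. The sketch below is a deliberately simplified stand-in: instead of the power spectrum of the coefficient histogram used in [19], it reads the step off as the most common gap between occupied histogram bins, and the Laplacian model with step q = 7 is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
q_true = 7
# Idealized decompressed coefficients of one sub-band: exact multiples of q_true.
coeffs = np.round(rng.laplace(scale=20.0, size=20_000) / q_true) * q_true

# Dequantized coefficients form a comb whose spacing is the quantization step.
# Crude estimator: the most common gap between occupied integer bins
# (a stand-in for the spectral-peak analysis of the histogram in [19]).
hist, _ = np.histogram(coeffs, bins=np.arange(-128.5, 129.5))
occupied = np.flatnonzero(hist > 0)
gaps = np.diff(occupied)
q_est = int(np.bincount(gaps).argmax())
print("estimated quantization step:", q_est)
```

On real JPEG data the comb is blurred by the IDCT round-trip and by the second quantization, which is precisely why the estimate (and hence the whole detector) degrades with the quality level and the number of compressions.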

As an example of failure, we show the forged image obtained from an uncompressed Kodak image [10] (the "River" landscape) with a superimposed "Woman" (from the JPEG Standard uncompressed test set), previously resampled from its original size to 256 × 320. Before the copy and paste, the two images "Woman" and "River" were saved through Photoshop with levels 6 and 12 respectively. The final photoshopped image was then saved at all the possible Photoshop levels from 0 to 12. As a result we show two examples in Fig. 3: in the first row, the forged image saved with level 10 (Fig. 3(a)) and the resulting error map (Fig. 3(b)), which highlights several errors on blocks with high frequencies and fewer errors on low-frequency blocks (the woman and the sky). The second example, Fig. 3(d), shows the error map resulting from the same image (Fig. 3(c)) saved with a quality factor of 12 (the maximum). In this final case the method was unable to find any kind of forgery, and furthermore the level of estimated error is very low.

Bayesian Approach of Detecting Doctored Blocks

The four advantages possessed by the solution of He et al. [13], namely automatic determination of the doctored part, resistance to different kinds of forgery techniques in the doctored part, ability to work without full decompression, and fast detection speed, make the algorithm very attractive. However, the initial quantization factor q1 and the second one q2 must be known in order to estimate the periodic function n(k) needed to characterize the periodicity of the histograms of doubly quantized signals. Actually, q2 can be read directly from the JPEG image. Unfortunately, q1 is lost after the first decompression and hence has to be estimated. For our purposes, as the initial quantization factor is unknown, it can only be estimated from the image under analysis. Also for this reason we have designed a reference dataset where the original input image can be compressed using different quantization parameters (i.e., different quantization tables at different quality levels).

The main issue of this approach is the correct estimation of n(k); for this purpose we show some examples of periodicity estimation. The authors correctly assert that if the periodicity is n(k) = 1 it is impossible to detect the forgery, whereas if the periodicity is greater than 1 a forgery is likely to be present in the image. In Fig. 4 we have considered three typical cases: Fig. 4(a) is a doctored image where all the subjects in the scene have the same face, compressed at a medium quality level; Fig. 4(b) shows an image without any kind of forgery; Fig. 4(c) is the test image described above saved with quality factor 7. As is clearly visible, the periodicity estimated for the image of Fig. 4(a), shown in Fig. 4(d), is uniformly equal to 1 for all the 63 DCT coefficient statistics (the DC has been excluded due to the nature of its histogram distribution); the method therefore reports no forgery, even though the image is doctored. On the other hand, the estimated periodicity for the image of Fig. 4(b), shown in Fig. 4(e), suggests the presence of artifacts, but the original image was acquired directly from a digital camera and has not been altered. Finally, we show the periodicity of the above-mentioned test image: in this case the periodicity (Fig. 4(f)) is altered only for coefficient 19 of the DCT basis, and furthermore, as there is only a small periodicity deviation, the forgery cannot be detected.
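The role of the periodicity can be illustrated with a simplified, hypothetical stand-in for n(k) (comb_period below is not He et al.'s estimator; the window size, the Laplacian coefficient model and the steps q1 = 5, q2 = 3 are assumptions): a singly quantized sub-band yields period 1, a doubly quantized one a period greater than 1.

```python
import numpy as np

def comb_period(vals, window=15, max_period=8):
    # Occupancy of the integer histogram bins in a central window.
    hist, _ = np.histogram(vals, bins=np.arange(-window - 0.5, window + 1.5))
    occ = hist > 0
    # Smallest p > 1 for which the occupancy is a pure comb: every residue
    # class mod p is either entirely empty or entirely full.
    for p in range(2, max_period + 1):
        classes = [occ[r::p] for r in range(p)]
        empty = sum(1 for c in classes if not c.any())
        full = sum(1 for c in classes if c.all())
        if empty >= 1 and empty + full == p:
            return p
    return 1  # no periodic anomaly found

rng = np.random.default_rng(2)
u = rng.laplace(scale=12.0, size=50_000)
q1, q2 = 5, 3

single = np.round(u / q2)                      # one quantization step
double = np.round(np.round(u / q1) * q1 / q2)  # double quantization, Eq. (1)

print(comb_period(single), comb_period(double))
```

As in the examples above, a reported period of 1 is uninformative: it occurs both for genuinely single-compressed content and for doctored content whose double-quantization traces are too weak to survive.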

Multiple-Differences

Farid [7] presents a technique based on multiple differences between the original compressed JPEG and successively re-compressed versions of the original image, with increasing compression factors. The disadvantage of this approach is that it is only effective when the tampered region is of lower quality than the image into which it was inserted. The advantage is that it is effective on low-quality images and can detect relatively small regions that have been altered. In Fig. 5 we show the error maps estimated from the doctored image composed of "Woman" and "River", with compression qualities 11 and 12 respectively. The final doctored image was saved with quality factor 12. The error maps do not show any significant ghost in any of the differences; in particular, the sky and the woman present similar error levels and cannot be identified as clearly forged areas. The authors assert that detection is only effective when the tampered region is of lower quality than the rest of the image, which is the case in the example of Fig. 5.
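The ghost idea of [7] can be reproduced with a toy one-band model (a sketch: scalar quantization of a single coefficient band stands in for full JPEG compression, and the steps q0 = 6 for the pasted region and q_final = 2 for the final save are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
coeffs = rng.laplace(scale=16.0, size=10_000)

def quantize(x, q):
    return np.round(x / q) * q

# Toy model: the pasted region was first saved with step q0 = 6,
# then the whole composite was saved with the final step q_final = 2.
q0, q_final = 6, 2
tampered = quantize(quantize(coeffs, q0), q_final)

# Re-save at a range of candidate steps: the error generally grows with q,
# but dips back down (a "ghost") at the step used for the earlier save.
qs = range(2, 10)
err = [np.mean((tampered - quantize(tampered, q)) ** 2) for q in qs]

ghosts = [q for i, q in enumerate(qs)
          if 0 < i < len(err) - 1 and err[i] < err[i - 1] and err[i] < err[i + 1]]
print("ghost detected at step(s):", ghosts)
```

The local minimum at q0 = 6 is the ghost; in the full method the same difference is computed per spatial block, so the dip localizes the region that went through the earlier, coarser compression.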

As shown in the three cases above, the reliability of the current solutions is not very robust with respect to various aspects of the image forgery pipeline. A more sophisticated and extensive experimental assessment is needed for each forthcoming solution. In our opinion, the relative ratio between q1 and q2 is fundamental to drive any existing forgery detection strategy in the DCT domain. Also, the relative size of the forged patch with respect to the input resolution should be taken into account. Finally, all the methods that try to estimate the original quantization table block by block should be tuned by considering the content of each block in terms of local energy (e.g., textured vs. flat areas).


Figure 3: Error maps obtained through [19] for a doctored image saved with different quality factors: (a) saved doctored image with quality factor 10, (c) the same image saved with quality factor 12; (b) and (d) the resulting block error maps.

[Figure 4 plot data omitted; panel titles: "Multifaces", "IMG_8257", "Woman_phtshp06 + Kodim13_phtshp12 - Quality07"; x-axis: DCT coefficient index 1-63; y-axis: estimated periodicity.]

Figure 4: Periodicity estimation through the He et al. [13] method. (a) Original doctored image with low compression quality; (b) original undoctored image; (c) doctored image saved with Photoshop quality 7. (d), (e) and (f) Periodicity estimation from the 63 DCT coefficient histograms.


Figure 5: Multiple-difference maps from the original "woman + river" doctored image; in this case the woman was compressed with quality 11 and the river with quality 12. The doctored image was saved with quality 12. The maps are shown from bottom right to top left in decreasing order of quality factor.

Figure 6: Woman (JPEG Standard test image) and Lena masks, used to generate forgeries.

4. DATASET AND DISCUSSION

To assess the effectiveness of any forgery detection technique, a suitable dataset of examples should be used for evaluation. In this paper we are interested in providing a reference dataset for forgery detection to be used for performance assessment, including various critical (and in some cases typical) conditions. It is also important to include in the dataset images without any kind of forgery, to assess the rate of false positives. The initial dataset contains a number of uncompressed images organized with respect to different resolutions, sizes and camera models. The standard datasets of Kodak images [10] and UCID v.2 [4] are also taken into consideration. At the moment we have extracted the quantization tables used by Photoshop [12], which together with the standard tables [18] can be used to generate various compression ratios. The reasoning is mainly based on the fact that someone who wants to introduce forgeries into an image must use some imaging software to achieve that intent. In particular, we are designing a series of Photoshop scripts that can be used in an unsupervised way to generate realistic examples. We are planning to add to the dataset a larger number of quantization tables, as described in [6]. Using manually generated masks, as shown in Fig. 6, it is possible to generate several kinds of forgeries, modifying both the forged image and the compression factor, again making use of the scripting mechanism provided by Photoshop (which is widely used in image forensics analysis). The dataset can be downloaded from our site [14] and is composed of original uncompressed images, masks for forgery generation, and scripts to generate the final standard forgeries. All the case studies presented in the previous section have been extracted from the dataset and have been useful to understand and evaluate the performance of the existing approaches.
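The mask-driven forgery generation can be sketched as a simple composite (a toy grayscale stand-in: the real dataset uses manually drawn masks such as those in Fig. 6 and Photoshop scripting; the square mask here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy grayscale stand-ins for a host image and a donor image.
host = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
donor = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Binary mask selecting the region to paste.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True

# Copy-and-paste composite: donor pixels inside the mask, host elsewhere.
forged = np.where(mask, donor, host)
print("pasted pixels:", int(mask.sum()))
```

In the actual pipeline each of the two sources is first JPEG-compressed at its own quality level, the composite is re-saved at a final quality, and the mask doubles as the ground truth for scoring the detectors.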

A series of possible post-processing techniques able to manipulate the forged image so as to reduce the performance of current state-of-the-art solutions will be taken into consideration in future developments. We plan to add to our dataset a series of possible image alterations, including among others: cropping, flipping, rotation, rotation and scaling, sharpening, Gaussian filtering, random bending, linear transformations, aspect ratio changes, scale changes, line removal, color reduction, etc. We hope to re-use, in some sense, part of the software already available in the field of benchmarking tools for still image watermarking algorithms (e.g., Stirmark, CheckMark, Optimark) [17, 16].

One possible strategy to improve the robustness and reliability of forgery detection in the DCT domain is related to the choice of the effective DCT bases to be considered in the process. Preliminary results have reported an improvement of about 10% in terms of forgery detection by implementing some heuristics devoted to filtering out low and high frequency coefficients, working only on mid-band values.
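One way to express such a mid-band heuristic is through the zigzag scan order of the 8 × 8 block (the cutoff positions 6 and 40 below are illustrative assumptions, not the values used in our experiments):

```python
def zigzag_order(n=8):
    """Zigzag scan order of an n x n block (JPEG convention): coefficients
    sorted by anti-diagonal, alternating the traversal direction."""
    key = lambda rc: (rc[0] + rc[1],
                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1])
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

zz = zigzag_order()

# Hypothetical mid-band selection: skip the DC term and the first few AC
# terms (low frequencies) as well as the high-frequency tail.
mid_band = zz[6:41]
print(len(mid_band), "bases kept; first:", mid_band[0])
```

Restricting the periodicity analysis to these positions discards the DC and near-DC histograms, whose shape is dominated by image content, and the high-frequency ones, which are mostly quantized to zero and therefore carry little double-quantization evidence.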

5. ACKNOWLEDGMENTS

The authors want to thank the Interpol Crime Against Children Group of Lyon, who collaborated to redirect this work towards a much more detailed analysis of realistic problems. Their contribution has highlighted the issues raised by purely theoretical approaches and has steered our project towards a much more robust and helpful application.

6. CONCLUSIONS

Forgery detection is not a trivial task. Detection in the DCT domain considers specific peculiarities of the quantized bases. Unfortunately, the variability of the context, especially in real cases, includes many situations (e.g., resolution, camera model, compression ratio, quantization parameters, number of forgeries, etc.) that are not fully covered. In this paper we have presented experimental evidence of the weak and strong points of the current solutions. We have also started to build a comprehensive dataset that can be used for a robust assessment and evaluation of any forthcoming technique in the field. A detailed experimental framework, together with a freely accessible dataset to be used for DCT-based forgery detection, has been presented.

7. REFERENCES

[1] C. Baron. Adobe Photoshop Forensics. Course Technology PTR, 1st edition, 2007. ISBN 1598634054.

[2] S. Battiato, A. Capra, I. Guarneri, and M. Mancuso. DCT optimization for CFA data images. In M. M. Blouke, N. Sampat, and R. J. Motta, editors, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, volume 5301, pages 429-437, June 2004.

[3] S. Battiato, M. Mancuso, A. Bosco, and M. Guarnera. Psychovisual and statistical optimization of quantization tables for DCT compression engines. In Proceedings of the 11th International Conference on Image Analysis and Processing, pages 602-606, September 2001.

[4] D. Borghesani, C. Grana, and R. Cucchiara. Color features comparison for retrieval in personal photo collections. In ACS'08: Proceedings of the 8th Conference on Applied Computer Science, pages 265-268, Stevens Point, Wisconsin, USA, 2008. World Scientific and Engineering Academy and Society (WSEAS).

[5] A. Bruna, A. Capra, S. Battiato, and S. La Rosa. Advanced DCT rate control by single step analysis. In Consumer Electronics, 2005. ICCE. 2005 Digest of Technical Papers. International Conference on, pages 453-454, January 2005.

[6] H. Farid. Digital image ballistics from JPEG quantization: A followup study. Technical Report TR2008-638, Department of Computer Science, Dartmouth College, 2008.

[7] H. Farid. Exposing digital forgeries from JPEG ghosts. IEEE Transactions on Information Forensics and Security, 4(1):154-160, 2009.

[8] H. Farid. Image forgery detection: A survey. IEEE Signal Processing Magazine, 26(2):16-25, 2009.

[9] H. Farid. Photo tampering throughout history. http://www.cs.dartmouth.edu/farid/research/digitaltampering/, 2009.

[10] R. Franzen. Kodak lossless true color image suite. http://r0k.us/graphics/kodak/.

[11] R. C. Gonzalez and R. E. Woods. Digital Image Processing (3rd edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2006.

[12] C. Hass. JPEG compression quality tables for digital cameras and digital photography software. http://www.impulseadventure.com/photo/jpeg-quantization.html.

[13] J. He, Z. Lin, L. Wang, and X. Tang. Detecting doctored JPEG images via DCT coefficient analysis. In Computer Vision - ECCV 2006, 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006, Proceedings, Part III, pages 423-435, 2006.

[14] IPLab. Image Processing Laboratory - Forensic Database. http://iplab.dmi.unict.it/index.php?option=com_docman&Itemid=111.

[15] W. B. Pennebaker and J. L. Mitchell. JPEG Still Image Data Compression Standard. Kluwer Academic Publishers, Norwell, MA, USA, 1992.

[16] F. A. P. Petitcolas. Watermarking schemes evaluation. IEEE Signal Processing Magazine, 17(5):58-64, September 2000.

[17] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn. Attacks on copyright marking systems. In D. Aucsmith, editor, Information Hiding, Second International Workshop, IH'98, pages 219-239, Portland, Oregon, USA, April 1998. Springer-Verlag. ISBN 3-540-65386-4.

[18] G. K. Wallace. The JPEG still picture compression standard. Communications of the ACM, 34(4):30-44, 1991.

[19] S. Ye, Q. Sun, and E.-C. Chang. Detecting digital image forgeries by measuring inconsistencies of blocking artifact. In Proceedings of the 2007 IEEE International Conference on Multimedia and Expo (ICME 2007), July 2-5, 2007, Beijing, China, pages 12-15, 2007.
