How the hell can Sun take something complicated and make it overengineered and overly complicated to boot?
Well, if the functionality is ‘drag and drop’, the answer is apparently easy.
Now, in the ideal world, drag and drop would be handled as follows: your custom drag component would call a magic "is drag initiated" routine when its MouseListener.mousePressed() method is called, and that routine would take control of the mouse click and automagically determine if drag and drop was initiated. If it was, the magic routine would return true, giving you the opportunity to build your drag operation and call some magic 'startDrag' routine. If the magic routine returned false, you would know to handle the mousePressed event as usual.
Drop would be similarly easy: you would register and implement a ‘DropTargetListener’ interface, which would give you the opportunity to say “yes, I’ll accept that drop” and handle mouse events and the drop target.
A third class would implement the ‘Transferable’ interface which would represent the thing you’re dragging.
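If I were to sketch what that ideal would look like, it might be something like the fragment below. To be clear: none of this exists in Swing. DragAndDrop.isDragInitiated() and DragAndDrop.startDrag() are made-up names standing in for the "magic routines" I'm wishing for.

    // Hypothetical sketch only: DragAndDrop.isDragInitiated() and
    // DragAndDrop.startDrag() do not exist in Swing; they stand in for the
    // imaginary helpers described above.
    public void mousePressed(MouseEvent e) {
        if (DragAndDrop.isDragInitiated(e)) {
            Transferable payload = buildThingBeingDragged(e);  // the data being dragged
            DragAndDrop.startDrag(e, payload);                 // hand off the drag
            return;
        }
        // Not a drag: handle the mouse press as usual.
    }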
Is Java Swing’s drag and drop this easy? That is, do they do the hard work so that, as someone who wants to implement a custom JComponent which accepts drag and drop, it’s fairly easy?
Uh, no.
No, instead we wind up with this whole dazzling array of stupidity on the part of Sun which, after spending a few hours deciphering some excellent articles, eventually leads you to something that resembles drag and drop nerdvana. (*sigh*)
What’s wrong with Drag and Drop?
Okay, to start off with, Sun apparently decided not to create an insulating drag & drop layer for Swing above AWT. So we need to use AWT, which scatters drag and drop functionality across two packages: java.awt.dnd (not Dungeons and Dragons) and java.awt.datatransfer. And our first step in creating a drag and drop tool is handling dragging.
With dragging, we need to do the following: (a) determine if our mouse event is a drag event, then, if true, (b) create an object we transfer, and (c) start dragging. So in my search I came across this thing called DragGestureRecognizer, and its subclass, MouseDragGestureRecognizer.
And immediately we encounter a violation of the first rule of making an application framework–a violation which is very common with intermediate-level developers: they engineered it to be more complicated than it needs to be. Why the fuck have an abstract drag gesture recognizer engine which is then instantiated with a concrete class? Does Sun think that at some time in the future drag recognition is going to be triggered by a network event? Do they anticipate a TCP/IP packet which starts a drag event?
Drag and drop is not a general-purpose user interface thingy; it is specifically an action that is triggered by a mouse gesture (or the equivalent of a mouse gesture). Conceptually if you press at a given location with your mouse (finger/tablet pen) and hold for a moment, you pick something up, and then you drag it somewhere else and you drop it. You don’t trigger this sequence through keyboard actions (unless your keyboard is emulating a mouse), nor do you trigger this sequence through a TCP/IP network connection or by manipulating the knobs on the front of the computer.
So the first rule of frameworks is violated: don’t make it overly generalized unless it needs to be.
So, now that we know that we somehow need this gesture recognizer to, well, recognize a gesture, how do we plug this in?
Not into mousePressed, we don't. Instead, when we construct our JComponent, we have to initialize the DragGestureRecognizer for our component and pass a reference to our component to this DragGestureRecognizer, which then listens to mouse events in the background. Well, that's convenient: instead of adding three lines of code to our mousePressed routine, we add three lines of code to our constructor and, like magic, our mouse presses are automatically translated into drag events.
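Concretely, the constructor wiring looks roughly like this (the class name is mine; the DragSource calls are the actual AWT API):

    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DragGestureEvent;
    import java.awt.dnd.DragGestureListener;
    import java.awt.dnd.DragSource;
    import javax.swing.JComponent;

    public class DraggableCanvas extends JComponent implements DragGestureListener {
        public DraggableCanvas() {
            // Hand our component to the default DragSource; the recognizer it
            // creates now watches our mouse events in the background.
            DragSource.getDefaultDragSource().createDefaultDragGestureRecognizer(
                    this, DnDConstants.ACTION_COPY_OR_MOVE, this);
        }

        @Override
        public void dragGestureRecognized(DragGestureEvent dge) {
            // Called (asynchronously, as far as our own mouse handlers are
            // concerned) when the recognizer decides a drag has started.
        }
    }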
Except now we have a problem. Because this effectively happens asynchronously to our mouse down/mouse move events (we keep receiving those events while the drag gesture recognizer is figuring out if this is a drag), we have the odd problem of continuing to receive mouse events without knowing whether they will ultimately be handled as a drag. So we go along, implementing a 'rectangle select' operation–then, all of a sudden, our rectangle select turns into a drag event.
The reason it is preferable to handle triggering drags on mouse down is that we know right away whether the event is a drag and drop event or not. Generally the rule for drag and drop is that if the mouse stays within a small (5 pixel) rectangle for more than a quarter of a second, the user is probably 'grabbing' the object to drag–so it's not like that quarter-second delay is going to be all that noticeable. Further, we can decide, before even asking if it is a drag event, whether the location where we clicked is eligible for dragging in the first place.
But now we have this asynchronous operation taking place behind our back–a monster we have to feed. And while the solution is simple (on our click event, decide if we are eligible for a drag event, and if we are, bail and let the background drag code handle things), it has two problems. First, it scatters the logic for handling drag and drop across multiple locations rather than keeping it in one spot, which makes maintenance a royal pain in the ass. Second, it means that if we select an object eligible for drag and drop, we have no way to implement secondary behavior if the user clicks and drags in a way which doesn't fit our drag and drop semantics.
Bah.
No problem, I suppose: I still have to decide if the click is eligible for drag and drop, and if it is, set a variable with a reference to the current mouse event, because we need that event to set up drag and drop. By making the drag and drop operation asynchronous, however, we suddenly find drag and drop going from "call this function to see if it's a drag, then set up the drag object and run drag and drop" to implementing three separate interfaces and creating two classes.
I suppose two of those interfaces make a certain sort of sense, at least in the context where we have this background class "helping us": if we've got this "helper class" helping us, we need an interface to tell the "helper" what it should do. And that's the role of DragGestureListener: to listen for the drag gesture event, decide if it's a valid drag operation, and trigger the drag if it is. And the second interface, the Transferable, is something we'd need anyway: it's the data being transferred.
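The Transferable is at least straightforward; here's a minimal sketch assuming the thing being dragged is just a string (the class name is mine):

    import java.awt.datatransfer.DataFlavor;
    import java.awt.datatransfer.Transferable;
    import java.awt.datatransfer.UnsupportedFlavorException;

    public class TextPayload implements Transferable {
        private final String text;

        public TextPayload(String text) { this.text = text; }

        @Override
        public DataFlavor[] getTransferDataFlavors() {
            // The one representation we offer: a plain Java string.
            return new DataFlavor[] { DataFlavor.stringFlavor };
        }

        @Override
        public boolean isDataFlavorSupported(DataFlavor flavor) {
            return DataFlavor.stringFlavor.equals(flavor);
        }

        @Override
        public Object getTransferData(DataFlavor flavor) throws UnsupportedFlavorException {
            if (!isDataFlavorSupported(flavor)) {
                throw new UnsupportedFlavorException(flavor);
            }
            return text;
        }
    }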
The interface DragSourceListener, however, doesn't seem to serve any purpose I can think of–except it does allow us to work around a bug in MacOS X's implementation of Drag and Drop which doesn't reset the mouse cursor when drag and drop ends. (*sigh*)
So, instead of overriding mousePressed to (a) determine if this is a drag event by calling some routine, (b) set up the drag, and (c) call a routine to handle dragging, we instead (a) get an instance of the DragSource object, (b) create a new DragGestureRecognizer with (c) our inner class implementing the DragGestureListener interface and the DragSourceListener interface, which (of course) (d) fixes the MacOS X Java bug by setting the cursor in dragExit. Then we override mousePressed, (e) calculating if this is a drag event (which we'd have to do anyway), and (f) if so, setting some variable which our DragGestureListener can look at to see if it should then actually drag. In our DragGestureListener, we then (g) set up the drag and (h) call a routine to handle dragging.
Because 9 steps is easier than 3, natch.
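Spelled out in code, those nine steps look roughly like this. The class and field names are mine, the hit test is a placeholder, and I'm resetting the cursor via the DragSourceContext in dragExit as one way of working around the MacOS X cursor bug:

    import java.awt.Cursor;
    import java.awt.datatransfer.StringSelection;
    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DragGestureEvent;
    import java.awt.dnd.DragGestureListener;
    import java.awt.dnd.DragSource;
    import java.awt.dnd.DragSourceDragEvent;
    import java.awt.dnd.DragSourceDropEvent;
    import java.awt.dnd.DragSourceEvent;
    import java.awt.dnd.DragSourceListener;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.swing.JComponent;

    public class DragCanvas extends JComponent {
        private MouseEvent pendingDrag;  // (f) set in mousePressed, checked by the listener

        public DragCanvas() {
            DragHandler handler = new DragHandler();
            // (a) get the DragSource, (b) create the recognizer, (c) pass the inner class.
            DragSource.getDefaultDragSource().createDefaultDragGestureRecognizer(
                    this, DnDConstants.ACTION_MOVE, handler);

            addMouseListener(new MouseAdapter() {
                @Override
                public void mousePressed(MouseEvent e) {
                    // (e) decide whether this press landed on something draggable.
                    pendingDrag = pressIsOnDraggableThing(e) ? e : null;
                }
            });
        }

        private boolean pressIsOnDraggableThing(MouseEvent e) {
            return true;  // placeholder hit test; the real logic goes here
        }

        private class DragHandler implements DragGestureListener, DragSourceListener {
            @Override
            public void dragGestureRecognized(DragGestureEvent dge) {
                if (pendingDrag == null) return;  // mousePressed said "not draggable"
                // (g) set up the drag and (h) start it with a Transferable payload.
                dge.startDrag(DragSource.DefaultMoveDrop,
                        new StringSelection("payload"), this);
            }

            // (d) MacOS X workaround: put the cursor back when the drag exits.
            @Override
            public void dragExit(DragSourceEvent dse) {
                dse.getDragSourceContext().setCursor(Cursor.getDefaultCursor());
            }

            @Override public void dragEnter(DragSourceDragEvent dsde) { }
            @Override public void dragOver(DragSourceDragEvent dsde) { }
            @Override public void dropActionChanged(DragSourceDragEvent dsde) { }
            @Override public void dragDropEnd(DragSourceDropEvent dsde) { pendingDrag = null; }
        }
    }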
Dropping
Now dropping should be an equally easy operation, and for the most part it is: you have a DropTargetListener which actually listens for drop events. And of course you still need to tell some global mechanism that your control accepts drop events: this appears to be the role of the DropTarget class. So we create a new DropTarget, give it a reference to our component and a reference to our inner class which implements the DropTargetListener interface, and away we go.
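In code, the drop side is about this much (the class name is mine; I'm using DropTargetAdapter so only drop() has to be implemented):

    import java.awt.datatransfer.DataFlavor;
    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DropTarget;
    import java.awt.dnd.DropTargetAdapter;
    import java.awt.dnd.DropTargetDropEvent;
    import javax.swing.JComponent;

    public class DropCanvas extends JComponent {
        public DropCanvas() {
            // Registering the DropTarget is all it takes to make this
            // component accept drops; the listener handles the rest.
            new DropTarget(this, new DropTargetAdapter() {
                @Override
                public void drop(DropTargetDropEvent dtde) {
                    try {
                        dtde.acceptDrop(DnDConstants.ACTION_COPY_OR_MOVE);
                        String text = (String) dtde.getTransferable()
                                .getTransferData(DataFlavor.stringFlavor);
                        // ...do something with the dropped text...
                        dtde.dropComplete(true);
                    } catch (Exception ex) {
                        dtde.dropComplete(false);
                    }
                }
            });
        }
    }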
So dropping isn’t as painful as dragging, I suppose.
Now if someone could tell me why Java took something that should have been two calls and turned it into two interfaces and two classes too many, I'd be grateful.
A network drag may be useful if someone drags an item from the screen of one computer to the screen of another. There are already applications that allow a mouse and keyboard to work across two computers by using the network. I don't think network drags are that far-fetched.
The only problem I see with it is that you need a protocol that allows the mouse and keyboard to work across two computers using a network–at which point, from the user’s perspective he’s clicking and dragging a virtual mouse. That is, in this case, what seems to me to be going on is that a virtual mouse protocol would be built on top of the network stack–and, in order to make it work, the mouse protocol would have to look like a regular (non-network) mouse protocol.
So in the end it would be a mouse event (granted, a mouse event that came across a network protocol) driving drag and drop.
I can’t believe it’s nearly 2014 and it’s still as awkward ….. great write up ….
Does the current Swing drag and drop framework support introducing a configurable delay after mouse pressed?